Joel S. Snyder’s research while affiliated with University of Nevada, Las Vegas and other places


Publications (108)


[Figures: (1) 10 s excerpts of the 36 s auditory stimulus (x axis: time, y axis: sound amplitude); (2) average probe task accuracy and average imagery success rating; (3) lab-level average of the steady-state evoked potentials (SSEPs) elicited by the 2.4 Hz …]
Registered Report: Replication and Extension of Nozaradan, Peretz, Missal and Mouraux (2011)
  • Preprint
  • File available

March 2025 · 16 Reads

Karli M. Nave · [...] · Joel S. Snyder

Cognitive neuroscience research has attempted to disentangle stimulus-driven processing from conscious perceptual processing for decades. Some prior evidence for neural processing of perceived musical beat (periodic pulse) may be confounded by stimulus-driven neural activity. However, one study used frequency tagging, which measures electrical brain activity at frequencies present in a stimulus, to show increased brain activity at imagery-related frequencies when listeners imagined a metrical pattern while listening to an isochronous auditory stimulus (Nozaradan et al., 2011) in a manner that controlled for stimulus factors. It is unclear, though, whether this represents repeatable evidence for conscious perception of beat and whether the effect is influenced by relevant music experience, such as music and dance training. This registered report details the results of 13 independent conceptual replications of Nozaradan et al. (2011), all using the same vetted protocol. Listeners performed the same imagery tasks as in Nozaradan et al. (2011), with the addition of a behavioral task on each trial to measure conscious perception. Meta-analyses examined the effect of imagery condition, revealing smaller raw effect sizes (Binary: 0.03 µV, Ternary: 0.03 µV) than in the original study (Binary: 0.12 µV, Ternary: 0.20 µV) with no moderating effects of music or dance training. The difference in estimated effect sizes (this study: n = 152, ηp² = .03-.04; 2011 study: n = 8, ηp² = .62-.76) suggests that large sample sizes may be required to reliably observe these effects, which challenges the use of frequency tagging as a method to study (neural correlates of) beat perception. Furthermore, a binary logistic regression on individual trials revealed that only neural activity at the stimulus frequency predicted performance on the imagery-related task; contrary to our hypothesis, neural activity at the imagery-related frequency was not a significant predictor. We discuss possible explanations for discrepancies between these findings and the original study and implications of the extensions provided by this registered report.
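As an illustration of the frequency-tagging analysis described above (measuring spectral amplitude at stimulus- and imagery-related frequencies), here is a minimal Python sketch. It is not the authors' published pipeline; the epoch array, sampling rate, and neighbor-bin noise correction are assumptions.

```python
# Minimal frequency-tagging sketch (not the authors' pipeline). Assumes `eeg`
# is a 1-D NumPy array holding a trial-averaged EEG epoch from one electrode,
# sampled at `fs` Hz.
import numpy as np

def ft_amplitude(eeg, fs, target_hz, n_neighbors=4, gap=1):
    """Noise-corrected spectral amplitude at target_hz.

    The amplitude at the target frequency bin is corrected by subtracting the
    mean amplitude of neighboring bins, a common way to remove broadband noise
    in steady-state evoked potential (SSEP) analyses.
    """
    n = len(eeg)
    spectrum = np.abs(np.fft.rfft(eeg)) * 2 / n          # single-sided amplitude spectrum
    freqs = np.fft.rfftfreq(n, d=1 / fs)
    target = np.argmin(np.abs(freqs - target_hz))        # bin closest to the target frequency
    lo = np.arange(target - gap - n_neighbors, target - gap)
    hi = np.arange(target + gap + 1, target + gap + 1 + n_neighbors)
    noise = spectrum[np.concatenate([lo, hi])].mean()    # broadband-noise estimate
    return spectrum[target] - noise

# Example frequencies follow the original Nozaradan et al. (2011) design
# (2.4 Hz beat; 1.2 Hz binary and 0.8 Hz ternary meter):
# amplitudes = {f: ft_amplitude(eeg, fs=512, target_hz=f) for f in (2.4, 1.2, 0.8)}
```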


Object and Setting Identification in Natural Auditory Scenes

March 2025 · 2 Reads

Margaret Ann McMullin · [...] · Joel S. Snyder

We encounter situations each day that require our auditory system to quickly interpret our surroundings. Auditory scene perception involves complex processes that allow us to identify both the setting and the objects within a scene, which are essential for decision-making and situational awareness. While there is substantial evidence for distinct cortical regions and pathways supporting visual scene and object recognition, far less is known about how the brain processes complex auditory scenes and objects. This study aimed to determine whether distinct mechanisms underlie auditory setting and object identification and whether these mechanisms interact to aid perception. Participants listened to 200 natural auditory scenes of varying durations (1, 2, and 4 sec) and identified the setting (e.g., café) as well as the objects (e.g., talking, espresso machine, music) within each scene. Overall, performance was highest on the object identification task, and there was a significant interaction between task and scene duration, with a greater benefit of longer durations for the object identification task. Different low- and mid-level acoustic features of the scenes predicted performance on the two tasks. These results suggest that the auditory system employs distinct (and potentially interactive) computations for setting and object identification, allowing for quick interpretation of complex, real-world auditory scenes. The interaction between task performance and scene duration further suggests object and setting identification may operate on different temporal scales; objects might require more time for accurate individuation, while settings may be recognized more efficiently through global properties of scenes, such as openness or naturalness.
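To make the reported task-by-duration interaction concrete, the following is a minimal sketch of one way such an interaction could be tested with a mixed-effects model. The data below are synthetic placeholders and the model is illustrative; it is not necessarily the analysis used in the study.

```python
# Illustrative mixed-effects test of a task-by-duration interaction on
# identification accuracy; the data below are synthetic placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
rows = []
for subj in range(30):
    for task in ("object", "setting"):
        for dur in (1, 2, 4):
            # Toy effect: accuracy rises with duration, more so for objects.
            p = 0.6 + 0.05 * np.log2(dur) * (1.5 if task == "object" else 1.0)
            rows.append({"subject": subj, "task": task, "duration": dur,
                         "accuracy": rng.binomial(20, p) / 20})
df = pd.DataFrame(rows)

# Random intercept per participant; the interaction terms index whether the
# benefit of longer durations differs between object and setting identification.
model = smf.mixedlm("accuracy ~ C(task) * C(duration)", data=df, groups=df["subject"])
print(model.fit().summary())
```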


Object and Setting Identification in Natural Auditory Scenes

February 2025 · 11 Reads

We encounter situations each day that require our auditory system to quickly interpret our surroundings. Auditory scene perception involves complex processes that allow us to identify both the setting and the objects within a scene, which are essential for decision-making and situational awareness. While there is substantial evidence for distinct cortical regions and pathways supporting visual scene and object recognition, far less is known about how the brain processes complex auditory scenes and objects. This study aimed to determine whether distinct mechanisms underlie auditory setting and object identification and whether these mechanisms interact to aid perception. Participants listened to 200 natural auditory scenes of varying durations (1, 2, and 4 sec) and identified the setting (e.g., café) as well as the objects (e.g., talking, espresso machine, music) within each scene. Overall, performance was highest on the object identification task, and there was a significant interaction between task and scene duration, with a greater benefit of longer durations for the object identification task. Different low- and mid-level acoustic features of the scenes predicted performance on the two tasks. These results suggest that the auditory system employs distinct (and potentially interactive) computations for setting and object identification, allowing for quick interpretation of complex, real-world auditory scenes. The interaction between task performance and scene duration further suggests object and setting identification may operate on different temporal scales; objects might require more time for accurate individuation, while settings may be recognized more efficiently through global properties of scenes, such as openness or naturalness.


Misophonia reactions in the general population are correlated with strong emotional reactions to other everyday sensory–emotional experiences

July 2024 · 97 Reads · 4 Citations

Misophonic experiences are common in the general population, and they may shed light on everyday emotional reactions to multi-modal stimuli. We performed an online study of a non-clinical sample to understand the extent to which adults who have misophonic reactions are generally reactive to a range of audio-visual emotion-inducing stimuli. We also hypothesized that musicality might be predictive of one's emotional reactions to these stimuli because music is an activity that involves strong connections between sensory processing and meaningful emotional experiences. Participants completed self-report scales of misophonia and musicality. They also watched videos meant to induce misophonia, autonomous sensory meridian response (ASMR) and musical chills, and were asked to click a button whenever they had any emotional reaction to the video. They also rated the emotional valence and arousal of each video. Reactions to misophonia videos were predicted by reactions to ASMR and chills videos, which could indicate that the frequency with which individuals experience emotional responses varies similarly across both negative and positive emotional contexts. Musicality scores were not correlated with measures of misophonia. These findings could reflect a general phenotype of stronger emotional reactivity to meaningful sensory inputs. This article is part of the theme issue ‘Sensing and feeling: an integrative approach to sensory processing and emotional experience’.




Preliminary Evidence for Global Properties in Human Listeners During Natural Auditory Scene Perception

March 2024 · 47 Reads · 2 Citations

Open Mind

Theories of auditory and visual scene analysis suggest the perception of scenes relies on the identification and segregation of objects within them, resembling a detail-oriented processing style. However, a more global process may occur while analyzing scenes, which has been evidenced in the visual domain. It is our understanding that a similar line of research has not been explored in the auditory domain; therefore, we evaluated the contributions of high-level global and low-level acoustic information to auditory scene perception. An additional aim was to increase the field’s ecological validity by using and making available a new collection of high-quality auditory scenes. Participants rated scenes on 8 global properties (e.g., open vs. enclosed) and an acoustic analysis evaluated which low-level features predicted the ratings. We submitted the acoustic measures and average ratings of the global properties to separate exploratory factor analyses (EFAs). The EFA of the acoustic measures revealed a seven-factor structure explaining 57% of the variance in the data, while the EFA of the global property measures revealed a two-factor structure explaining 64% of the variance in the data. Regression analyses revealed each global property was predicted by at least one acoustic variable (R² = 0.33–0.87). These findings were extended using deep neural network models where we examined correlations between human ratings of global properties and deep embeddings of two computational models: an object-based model and a scene-based model. The results support the idea that participants’ ratings are more strongly explained by a global analysis of the scene setting, though the relationship between scene perception and auditory perception is multifaceted, with differing correlation patterns evident between the two models. Taken together, our results provide evidence for the ability to perceive auditory scenes from a global perspective. Some of the acoustic measures predicted ratings of global scene perception, suggesting representations of auditory objects may be transformed through many stages of processing in the ventral auditory stream, similar to what has been proposed in the ventral visual stream. These findings and the open availability of our scene collection will make future studies on perception, attention, and memory for natural auditory scenes possible.
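As a rough illustration of the analysis style described above, here is a minimal sketch that factor-analyzes a set of acoustic measures and regresses one global-property rating on the factor scores. It uses scikit-learn's FactorAnalysis (no rotation) on placeholder arrays, so it is a simplified stand-in rather than the authors' EFA pipeline.

```python
# Simplified stand-in for the described analysis: factor-analyze acoustic
# measures, then predict a global-property rating from the factor scores.
# The arrays are random placeholders, not the published scene collection.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
acoustic = rng.normal(size=(200, 20))   # 200 scenes x 20 acoustic measures (placeholder)
openness = rng.normal(size=200)         # mean rating of one global property (placeholder)

# Seven components, mirroring the seven-factor structure reported for the
# acoustic measures (note: no varimax-style rotation is applied here).
fa = FactorAnalysis(n_components=7).fit(acoustic)
scores = fa.transform(acoustic)

# Regress the property rating on the factor scores and report R^2.
reg = LinearRegression().fit(scores, openness)
print("R^2 =", reg.score(scores, openness))
```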


An #EEGManyLabs study to test the role of the alpha phase on visual perception (a replication and new evidence)

December 2023 · 48 Reads

Several studies have suggested that low-frequency brain oscillations could be key to understanding how the brain samples sensory information via rhythmic alternation of low and high excitability periods. However, this hypothesis has recently been called into question following the publication of some null findings. As part of the #EEGManyLabs initiative, we set out to undertake a high-powered, multi-site replication of an influential study on this topic. In the original study, Mathewson et al. (2009) showed that during high amplitude fluctuations of alpha activity (8-13 Hz), the visibility of a visual target stimulus depended on the time the target was presented relative to the phase of the pre-target alpha activity. Furthermore, visual evoked potentials (e.g., N1, P1, P2 and P3) were larger in amplitude when the target was presented at the pre-stimulus alpha peaks, which were also associated with higher visibility. If we are successful in replicating the results of Mathewson et al. (2009), we intend to extend the original findings by conducting a second, original, experiment that varies the pre-stimulus time unpredictably to determine whether the phase-behavioural relationship depends on the target stimulus having a predictable onset time.
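For context, the following is a minimal sketch of how pre-target alpha phase can be estimated and related to detection performance (band-pass filter plus Hilbert transform, then binning hit rates by phase). It is illustrative only, not the registered analysis pipeline; the sampling rate and data arrays are assumptions.

```python
# Illustrative pre-target alpha-phase analysis (not the registered pipeline).
# `eeg_trials` would be an (n_trials, n_samples) array from one electrode and
# `detected` a boolean array of trial outcomes; both are hypothetical here.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 500                                              # sampling rate in Hz (assumed)
b, a = butter(4, [8, 13], btype="bandpass", fs=fs)    # alpha band (8-13 Hz)

def pretarget_phase(trial, target_sample):
    """Instantaneous alpha phase in the sample just before target onset."""
    alpha = filtfilt(b, a, trial)
    return np.angle(hilbert(alpha))[target_sample - 1]

# phases = np.array([pretarget_phase(tr, target_sample) for tr in eeg_trials])
# bins = np.digitize(phases, np.linspace(-np.pi, np.pi, 9)) - 1
# hit_rate_by_phase = [detected[bins == k].mean() for k in range(8)]
```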


[Figures: (1) the speech ABA paradigm and behavioral performance; (2) steady-state evoked neural oscillations and the distribution of alpha power (8-13 Hz) during the contextual cue vs. the ambiguous sequence; (3) context-driven neural oscillations and the alpha-power difference between large and small first-formant-difference (ΔF1) contexts]
Neural Alpha Oscillations index Context-Driven Perception of Ambiguous Vowel Sequences

November 2023 · 47 Reads · 4 Citations

iScience

Perception of bistable stimuli is influenced by prior context. In some cases, the interpretation matches how the preceding stimulus was perceived; in others, it tends to be the opposite of the previous stimulus percept. We measured high-density electroencephalography (EEG) while participants were presented with a sequence of vowels that varied in formant transition, promoting the perception of one or two auditory streams, followed by an ambiguous bistable sequence. For the bistable sequence, participants were more likely to report hearing the opposite percept of the one heard immediately before. This auditory contrast effect coincided with changes in alpha power localized in the left angular gyrus and left sensorimotor and right sensorimotor/supramarginal areas. The latter correlated with participants’ perception. These results suggest that the contrast effect for a bistable sequence of vowels may be related to neural adaptation in posterior auditory areas, which influences participants’ perceptual construal of ambiguous stimuli.
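As a simple illustration of the kind of alpha-power contrast described above, here is a minimal sketch comparing 8-13 Hz power during the ambiguous sequence between the two context conditions. It is illustrative only (sensor level, no source localization), and the trial arrays and sampling rate are assumptions.

```python
# Illustrative sensor-level alpha-power contrast between context conditions
# (no source localization). `trials_large` and `trials_small` would be
# (n_trials, n_samples) arrays from one electrode; they are hypothetical here.
import numpy as np
from scipy.signal import welch

fs = 500   # sampling rate in Hz (assumed)

def alpha_power(trials, fs, band=(8, 13)):
    """Mean 8-13 Hz power per trial from Welch power spectral densities."""
    freqs, psd = welch(trials, fs=fs, nperseg=fs * 2, axis=-1)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[..., mask].mean(axis=-1)

# contrast = alpha_power(trials_large, fs) - alpha_power(trials_small, fs)
# Averaging this contrast per participant and testing it across participants
# would index the context-driven alpha effect reported above.
```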


Misophonia reactions in the general population are correlated with strong emotional reactions to other everyday sensory-emotional experiences

October 2023 · 17 Reads · 2 Citations

Misophonic experiences are common in the general population, and they may shed light on everyday emotional reactions to multi-modal stimuli. We performed an online study of a non-clinical sample to understand the extent to which adults who have misophonic reactions are generally reactive to a range of audio-visual emotion-inducing stimuli. We also hypothesized that musicality might be predictive of one’s emotional reactions to these stimuli because music is an activity that involves strong connections between sensory processing and meaningful emotional experiences. Participants completed self-report scales of misophonia and musicality. They also watched videos meant to induce misophonia, autonomous sensory meridian response (ASMR), and musical chills, and were asked to click a button whenever they had any emotional reaction to the video. They also rated the emotional valence and arousal of each video. Reactions to misophonia videos were predicted by reactions to ASMR and chills videos, which could indicate that the frequency with which individuals experience emotional responses varies similarly across both negative and positive emotional contexts. Musicality scores were not correlated with measures of misophonia. These findings could reflect a general phenotype of stronger emotional reactivity to meaningful sensory inputs.


Citations (66)


... As the lack of capacity to experience musical pleasure can be tied to reduced connectivity between the auditory cortex and the subcortical reward network (i.e., 'musical anhedonia', see Martínez-Molina et al., 2016), misophonia may also be related to abnormalities in the wider auditory system. In fact, some evidence suggests that the frequency with which individuals experience emotional responses to sound covaries across both negative (i.e., misophonic triggers) and positive (i.e., musical) emotional contexts (Mednicoff et al., 2024). ...

Reference:

Toward cognitive models of misophonia
Misophonia reactions in the general population are correlated with strong emotional reactions to other everyday sensory–emotional experiences

... Whether it is an infant's interaction with rhythm [20], [21] or the performance of a professional symphony [22], rhythm perception has been a longstanding subject of interest in neuroscience [23], [24]. Notably, the tendency of embodied systems to anticipate the beat earlier than the actual beat timing [18] has framed this problem as a predictive coding task. ...

Theoretical and empirical advances in understanding musical rhythm, beat and metre
  • Citing Article
  • May 2024

Nature Reviews Psychology

... A characterisation of modulation statistics in this ecologically-valid context should be useful for future psychoacoustical, neuroscientific and ethological studies aiming at testing efficient-coding principles and/or identifying auditory cues and sensory mechanisms used by human and non-human animals for discriminating habitats and their global features (as in Apoux et al., 2023; McMullin et al., 2024; Miller-Viacava et al., 2024; Thoret et al., 2020). It should also be useful for ecoacoustical ...

Preliminary Evidence for Global Properties in Human Listeners During Natural Auditory Scene Perception
  • Citing Article
  • March 2024

Open Mind

... We used Classical Low Resolution Electromagnetic Tomography Analysis Recursively Applied (Iordanov et al., 2016; Iordanov et al., 2014; Berg & Scherg, 1994) to determine the intracerebral sources that account for continuous vs. discrete listening strategies in speech categorization (e.g., Alain et al., 2023; Bidelman et al., 2018; Carter et al., 2022). Source images were computed for endpoint (vw1/vw5) tokens within the 140-320 ms (~P2 wave) analysis window, where task and noise effects were maximal in the scalp ERPs (see …). ...

Neural Alpha Oscillations index Context-Driven Perception of Ambiguous Vowel Sequences

iScience

... This suggests to me that, for Patel, "spontaneous", in the context of rhythmic synchronization, primarily denotes a form of natural (not formal) learning, akin to first language acquisition, a perspective with which I agree. While recent evidence aligns with this perspective (e.g., [22]), the term "spontaneous" might not accurately convey the intended message, particularly given its usage as synonymous with "involuntary" or "reflexive" in the broader neuroscience field. ...

Sustained Musical Beat Perception Develops Into Late Childhood and Predicts Phonological Abilities

... Intermediation by the integrated perception. Existing research, without discriminating the foreground or the background, has reported the fast switching intermediated by the integrated perception (13,17,41,86). Reference (72) elucidates auditory tristability using "energy landscapes" and hints at switching intermediated by integrated perception. ...

Adaptation in the sensory cortex drives bistable switching during auditory stream segregation

Neuroscience of Consciousness

... They also noted that inconsistencies in existing groove research could stem from varying methodologies, musical repertoires, and participants' cultural backgrounds. For example, groove perception may be shaped by individual differences in dance experience (O'Connell et al., 2022). ...

Elements of musical and dance sophistication predict musical groove perception

... We focus on models which address 'what' and 'why' questions concerning specific differences in perception and cognition in people with misophonia as compared with normative controls. For the most part, we avoid including etiological factors, as there is little work available to date that bears on development of misophonia (but see Mednicoff et al., 2022 for an overview of that which exists, and Palumbo et al., 2018 for a summary of how associative and non-associative learning principles may be relevant for the development, maintenance, and perhaps treatment of misophonia). Additionally, creating models of a single state, such as having a disorder in adulthood, seems like a logical precursor to models of dynamic states, such as the process of developing a disorder. ...

Auditory affective processing, musicality, and the development of misophonic reactions

... Ample evidence supports the association between early rhythmic skills, language and literacy. Individual differences in children's non-linguistic rhythmic skills have been shown to relate to language and literacy outcomes in several studies with typically [1-7] and atypically developing children, including individuals diagnosed with developmental dyslexia (DD) [8-15] and developmental language disorder (DLD) [16-19]. While phonological awareness [20-22], short-term verbal memory [23-25] and rapid automatized naming (RAN) [22,26] are considered reliable predictors of reading outcomes, many studies have found rhythmic skills to account for unique variance of literacy skills [5,7,27]. ...

Sustained musical beat perception develops into late childhood and predicts phonological abilities
  • Citing Preprint
  • January 2022

... Limb movements and specific tasks require executive functions [48], which are primarily related to the brain's frontal lobe. Additionally, the frontal lobe also plays a vital role in cognitive function and rote memory [49,50], which highlights its importance in stroke rehabilitation. Passive and active-assisted motor movement promote neuroplasticity, particularly in the frontal lobe [51]. ...

Going Beyond Rote Auditory Learning: Neural Patterns of Generalized Auditory Learning