Does familiarity facilitate the cortical processing of music sounds?

University of Bonn, Bonn, North Rhine-Westphalia, Germany
Neuroreport (Impact Factor: 1.52). 11/2004; 15(16):2471-5. DOI: 10.1097/00001756-200411150-00008
Source: PubMed


Automatic cortical sound discrimination, as indexed by the mismatch negativity (MMN) component of the auditory evoked potential, is facilitated for familiar speech sounds (phonemes). In musicians, as compared with non-musicians, an enhanced MMN has been observed for complex non-speech sounds. Here, musically trained subjects were presented with sequences of either familiar (tonal) or structurally matched unfamiliar (atonal) triad chords, each with either a fixed or a randomly transposed chord root pitch. The MMN elicited by deviant chords did not differ between familiar and unfamiliar triad sequences, and was undiminished even for unfamiliar deviant sounds that were consciously undetectable. Only subsequent attention-related components indicated facilitated cognitive processing of familiar sounds, corresponding to higher behavioral detection scores.

  •
    • "More specifically, a long-term training effect on temporal processing has been found by Jongsma et al. (2005), who report higher N150 amplitudes time-locked to an auditory temporal omission. Yet, there are studies that do not find training effects in the auditory processing of pure tones, familiar or unfamiliar chords, or violations of temporal regularity (Lütkenhöner et al., 2006; Neuloh and Curio, 2004; van Zuijen et al., 2005). Thus, it appears that the literature on this matter is as yet inconclusive and that possible training differences might occur, although not in every aspect of temporal processing."
    ABSTRACT: The two main characteristics of temporal structuring in music are meter and rhythm. The present experiment investigated the event-related potentials (ERP) of these two structural elements with a focus on differential effects of attended and unattended processing. The stimulus material consisted of an auditory rhythm presented repetitively to subjects in which metrical and rhythmical changes as well as pitch changes were inserted. Subjects were to detect and categorize either temporal changes (attended condition) or pitch changes (unattended condition). Furthermore, we compared a group of long-term trained subjects (musicians) to non-musicians. As expected, behavioural data revealed that trained subjects performed significantly better than untrained subjects. This effect was mainly due to the better detection of the meter deviants. Rhythm as well as meter changes elicited an early negative deflection compared to standard tones in the attended processing condition, while in the unattended processing condition only the rhythm change elicited this negative deflection. Both effects were found across all experimental subjects with no difference between the two groups. Thus, our data suggest that meter and rhythm perception could differ with respect to the time course of processing and lend credence to the notion of different neurophysiological processes underlying the auditory perception of rhythm and meter in music. Furthermore, the data indicate that non-musicians are as proficient as musicians when it comes to rhythm perception, suggesting that correct rhythm perception is crucial not only for musicians but for every individual.
    Full-text · Article · Dec 2008 · Cortex
  •
    ABSTRACT: Electroencephalographic data suggest that spoken words produce an enhanced output of the brain's automatic deviance detection system, as reflected by the mismatch negativity. Using meaningful and nonmeaningful whistles, we sought to distinguish the effect of semantic content on the brain's deviance detection system from language-specific stimulus features. In the meaningful condition, study participants heard a human 'wolf whistle', which is commonly interpreted as an unsolicited expression of sexual attention. In the nonmeaningful condition participants heard an acoustically identical, but digitally rearranged, version of the wolf whistle. The mismatch negativity amplitude was significantly larger when the infrequent stimulus was meaningful than when it was meaningless. These data suggest that enhanced mismatch negativity magnitude was due to the semantic valence of the eliciting deviant.
    Preview · Article · Sep 2005 · Neuroreport
  •
    ABSTRACT: To increase our understanding of auditory neurocognition in musicians, we compared nonmusicians with amateur band musicians in their neural and behavioral sound encoding accuracy. Mismatch negativity and P3a components of the auditory event-related potentials were recorded to changes in basic acoustic features (frequency, duration, location, intensity, gap) and abstract features (melodic contour and interval size). Mismatch negativity was larger in musicians than in nonmusicians for location changes whereas no statistically significant group difference was observed in response to other feature changes or in abstract-feature mismatch negativity. P3a was observed only in musicians in response to location changes. This suggests that when compared with nonmusicians, even amateur musicians have neural sound processing advantages with acoustic information most essential to their musical genre.
    Full-text · Article · Aug 2006 · Neuroreport