
Musical syntax is processed in Broca's area: An MEG study

Burkhard Maess, Stefan Koelsch, Thomas C. Gunter and Angela D. Friederici
Max Planck Institute of Cognitive Neuroscience, PO Box 500 355, D-04303 Leipzig, Germany

Abstract

The present experiment was designed to localize the neural substrates that process music-syntactic incongruities, using magnetoencephalography (MEG). Electrically, such processing has been proposed to be indicated by early right-anterior negativity (ERAN), which is elicited by harmonically inappropriate chords occurring within a major-minor tonal context. In the present experiment, such chords elicited an early effect, taken as the magnetic equivalent of the ERAN (termed mERAN). The source of mERAN activity was localized in Broca's area and its right-hemisphere homologue, areas involved in syntactic analysis during auditory language comprehension. We find that these areas are also responsible for an analysis of incoming harmonic sequences, indicating that these regions process syntactic information that is less language-specific than previously believed.
540 nature neuroscience • volume 4 no 5 • may 2001
It seems plausible that music, like language, has a syntax: both
have a structure based on complex rules. However, how a musi-
cal syntax may be described has remained a matter of debate.
To investigate the processing of musical syntax, EEG studies
have taken advantage of the listener’s ability to expect specific
musical events according to a preceding musical context, and to
detect violations of harmonic expectancies within a musical
sequence. This ability may be an indication that a musical syn-
tax exists, mainly because the specificity of harmonic expectancy
corresponds to the degree of harmonic relatedness as described by
music theory. That is, subjects expect to hear in sequence
harmonically related but not harmonically unrelated chords.
Event-related brain potentials (ERPs) elicited by syntactic
incongruities in language and music were compared in a pre-
vious study. In that study, harmonic incongruities were inter-
preted as grammatical incongruity in music. It was shown that
both musical and linguistic structural incongruities elicit pos-
itivities with a latency of about 600 ms (the so-called P600)
that are statistically indistinguishable. The P600 reflects more
general knowledge-based structural integration during the per-
ception of rule-governed sequences. Additionally, a negative
music-specific ERP component with a latency of around
350 ms and an anterior right-hemisphere lateralization was
observed. This right anterior-temporal negativity (RATN) was
elicited by out-of-key chords, and taken to reflect the applica-
tion of music-syntactic rules.
In another EEG study, harmonically unrelated and func-
tionally inappropriate chords occurred within sequences of in-
key chords. Sequences consisting of in-key chords were composed
to build up a musical context, which correlates in listeners with
the buildup of strong expectancies to hear harmonically appro-
priate chords in sequence. The principles that form the basis
of these expectancies have been described as a ‘hierarchy of har-
monic stability’, and correspond to the theory of harmony.
Harmonically appropriate chords are tonally related chords or
chord functions that fit well at certain positions in a musical con-
text (for example, a tonic chord at the end of a sequence). Inap-
propriate chords elicited an early right-anterior negativity
(ERAN). Notably, such chords were consonant major chords; it
was only the musical context that made them sound unexpect-
ed. Within the musical context, they could only be differentiated
from the in-key chords by the application of (implicit) musical
knowledge about the principles of harmonic relatedness
described by music theory. These principles or rules of music
theory may be thought of as musical syntax.
Here we used the same experimental protocol as the preced-
ing EEG study. Participants (all ‘non-musicians’) were present-
ed with directly succeeding chord sequences, each consisting of
five chords (Fig. 1). Sequences consisting exclusively of in-key
chords (cadences) established a musical context toward the end
of each sequence (Fig. 1a). Due to the buildup of musical con-
text, harmonic expectancies that were most specific at the end
of each sequence were generated in listeners. Besides the in-key
chord sequences, however, some sequences contained harmoni-
cally unexpected chords: a ‘Neapolitan sixth chord’ occurred at
the third position in 25%, and at the fifth position in another
25% of all sequences (Fig. 1b and c). This chord is a variation of
the subdominant, and contains two out-of-key notes, although
the chord itself is major and consonant.
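The music-theoretical point above can be made concrete with a minimal sketch (ours, not from the study): representing pitches as pitch classes 0–11 shows that, although the Neapolitan sixth in C major is itself a consonant major triad, two of its three notes lie outside the C-major scale.

```python
# Pitch classes are integers 0-11 (C = 0). The Neapolitan sixth in
# C major is a D-flat major triad in first inversion: F, A-flat, D-flat.
C_MAJOR_SCALE = {0, 2, 4, 5, 7, 9, 11}         # C D E F G A B
NEAPOLITAN = {"F": 5, "Ab": 8, "Db": 1}

out_of_key = {note for note, pc in NEAPOLITAN.items()
              if pc not in C_MAJOR_SCALE}       # the two non-diatonic notes
in_key = set(NEAPOLITAN) - out_of_key           # only F remains in key
```

Only the scale membership is modeled here; voicing and chord function are not.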
Compared to in-key chords, chords containing out-of-key
(‘non-diatonic’) notes are, in music-theoretical terms, more
distant from the tonal center, and therefore perceived as unexpected. As noted before, the ability of listeners to
expect chords according to their harmonic relatedness to a pre-
ceding harmonic context has been proposed to reflect the exis-
tence of a musical syntax. Because the Neapolitan chords
violated the expectancy for tonally related chords to follow,
effects elicited by the Neapolitan chords were thus proposed
to reflect music-syntactic processing. Because of the musical
context buildup, the harmonic expectancies of listeners were
violated to a higher degree at the fifth position (where the
expectancies were most specific) compared to the third posi-
tion of a sequence. Therefore, the effects of Neapolitan chords
were proposed to be larger at the fifth compared to the third
position. In addition, from a music-theoretical perspective,
Neapolitan chords function harmonically as a subdominant
variation; a Neapolitan chord at the third position of the
sequence was, functionally, fairly suitable (because a subdom-
inant in that position was appropriate), whereas a Neapolitan
at the fifth position was functionally inappropriate (because
only a tonic chord would be appropriate in that position).
Thus, a Neapolitan chord as presented here may be taken as
‘music-syntactically’ incongruous on the basis of both music-
psychological (with respect to harmonic expectations) and music-
theoretical reasoning (with respect to harmonic chord functions
and rules). From both perspectives, the degree of music-syntac-
tic incongruity is higher for Neapolitans at the fifth compared to
the third position. In the present study, we show that the mag-
netic effect elicited by the Neapolitans was stronger at the fifth
compared to the third position, indicating that this effect reflects
music-syntactic processing. This effect was generated in both
hemispheres in the inferior pars opercularis, known in the left
hemisphere as Broca’s area.
Results
In-key chords elicited a large mean global field power (MGFP, a
measure of the strength of an evoked field), present in all sub-
jects at around 200 ms (relative to
stimulus onset, Fig. 2a). (This mag-
netic effect will henceforth be
referred to as the P2m.) Brain responses elicited from Neapolitan
and in-key chords in the fifth position clearly differed (Fig. 2b).
Neapolitan chords elicited a particular early magnetic field effect,
which was, at any sensor, nearly uni-modal over time, and was
largest around 200 ms (like the P2m). This effect (henceforth
referred to as the mERAN) can best be seen in the difference
waves of Fig. 2b. Virtually no magnetic effects were observable
after around 350 ms, for Neapolitans or for in-key chords.
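The difference-wave computation behind these comparisons can be sketched as follows. The 148-channel array, the ~254 Hz sampling rate, and the ~200 ms latency come from the text; the simulated field pattern and amplitudes are our invention.

```python
import numpy as np

rng = np.random.default_rng(3)
times = np.arange(256) / 254.31                # seconds, ~254 Hz sampling
bump = np.exp(-((times - 0.20) ** 2) / (2 * 0.02 ** 2))  # peak near 200 ms
topo = rng.normal(0.0, 1.0, (148, 1))          # hypothetical field pattern

in_key = rng.normal(0.0, 1e-15, (148, 256))    # evoked field, in-key chords
neapolitan = in_key + 3e-14 * topo * bump      # plus an mERAN-like effect

diff = neapolitan - in_key                     # the difference wave
gfp = diff.std(axis=0)                         # global field power per sample
latency_ms = 1000 * times[gfp.argmax()]        # peak latency of the effect
```

Subtracting the in-key evoked field isolates the effect of the Neapolitan chords, as in the figures.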
The field maps of both P2m and mERAN reveal a dipolar
pattern over each hemisphere (Fig. 3a and b). In all subjects,
the fields of the mERAN had a virtually inverted ‘polarity’
compared to the fields of the P2m. Moreover, the steepest field
gradients of the mERAN are anterior to those of the P2m, indi-
cating that the neural generators of the mERAN are anterior
to those of the P2m.
Effects elicited by Neapolitan chords at the third and fifth
position were very similar in distribution and time course; how-
ever, the third-position effects were distinctly smaller (about half
of the strength of fifth-position effects, Figs. 2c and 3c). The
MGFP of the mERAN (in-key chord signals subtracted from
Neapolitan chord signals, Fig. 4) elicited at the third position dif-
fered significantly from the MGFP of the mERAN elicited at the
fifth position (paired t-test; t = 5.69, p = 0.005). (MGFP was cal-
culated for third and fifth position for each subject separately in
the time window from 170–210 ms.)
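The MGFP comparison described above can be sketched as follows. This is a toy reconstruction with simulated subject data: the 170–210 ms window, the channel and subject counts, and the roughly 2:1 fifth-versus-third amplitude ratio follow the text, while the absolute field values are invented.

```python
import numpy as np
from scipy import stats

def mgfp(evoked, times, t_min, t_max):
    """Mean global field power: spatial SD across channels at each
    sample, averaged over the given time window."""
    window = (times >= t_min) & (times <= t_max)
    return evoked[:, window].std(axis=0).mean()

# Simulated data: 6 subjects, 148 channels, ~254 Hz sampling.
rng = np.random.default_rng(0)
times = np.arange(500) / 254.31
peak = np.exp(-((times - 0.19) ** 2) / (2 * 0.02 ** 2))  # bump near 190 ms
third, fifth = [], []
for _ in range(6):
    noise = rng.normal(0.0, 1e-15, (148, 500))
    topo = rng.normal(0.0, 1.0, (148, 1))      # per-subject field pattern
    third.append(mgfp(noise + 2e-14 * topo * peak, times, 0.170, 0.210))
    fifth.append(mgfp(noise + 4e-14 * topo * peak, times, 0.170, 0.210))

t, p = stats.ttest_rel(fifth, third)           # paired t-test across subjects
```

With six subjects, `ttest_rel` has 5 degrees of freedom, matching the paired design in the text.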
Dipole solutions
Dipole solutions for the P2m and the mERAN elicited at the fifth
position were obtained from each subject. (The signal-to-noise
ratio (SNR) of the effects elicited by Neapolitan chords at the
third position was too low to calculate reliable dipole solutions;
see Methods.) Then, locations of dipoles were transformed into
a Talairach-sized standard brain, and averaged across subjects.
For the P2m, one dipole was located in each hemisphere within
the middle part of Heschl’s gyrus (in the superior temporal
gyrus), which corresponds to Brodmann’s area (BA) 41 (Fig. 5).

Fig. 1. Examples of chord sequences. (a) Cadences consisting exclusively of in-key chords. (b) Chord sequences containing a Neapolitan sixth chord at the third position. (c) Chord sequences containing a Neapolitan at the fifth position; Neapolitan chords are indicated by arrows. (d) Example of directly succeeding chord sequences as presented in the experiment.

Fig. 2. Time courses of magnetic field strength. Data were chosen from two representative subjects at four sensors located in the magnetic field maxima. (a) P2m time course elicited by in-key chords. (b) Signals evoked by chords at the fifth position, plotted separately for Neapolitan (dashed lines) and in-key chords (dotted lines). The effect elicited by Neapolitan chords (mERAN) is indicated by the solid lines (difference wave, in-key chord signals subtracted from Neapolitan chord signals); this effect was maximal around 200 ms. (c) Signals evoked by chords at the third position (line designations as in b).

Fig. 3. P2m and mERAN, magnetic field maps. The maps of the mERAN were calculated by subtracting the event-related magnetic fields (ERFs) elicited by in-key chords from the ERFs of Neapolitan chords.

The dipole solution for the mERAN indicated, in each hemi-
sphere, one dipole located in the inferior part of the pars oper-
cularis (in the inferior frontal gyrus, part of BA 44; Fig. 5). The
residual normalized variance of dipole solutions was, on aver-
age, 5% for the mERAN and 4% for the P2m.
The generators of the mERAN were located approximately
2.5 cm anteriorly, and 1.0 cm superiorly with respect to the
generators of the P2m (Table 1). The generators of both the
P2m and the mERAN appear to have a stronger dipole moment
in the right than in the left hemisphere; a right-hemispheric
predominance of the mERAN was present in four of six sub-
jects. However, statistical analysis did not reveal a hemispher-
ic difference of effects.
To test whether the dipole locations of mERAN and P2m
differed significantly, y- and z-coordinates of dipoles were ana-
lyzed separately using ANOVAs with condition (P2m ×
mERAN) and hemisphere (left × right dipoles) as factors. Both
ANOVAs for y- and z-coordinates yielded an effect of condi-
tion (y-coordinates, F = 37.2, p < 0.005; z-coordinates,
F = 21.5, p < 0.01), indicating that the mERAN is generat-
ed anteriorly and superiorly to the P2m.
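The condition effect in this ANOVA can be sketched as follows. The per-subject y-coordinates below are invented, with means chosen to follow Table 1 (P2m near –17 mm, mERAN near +7 mm); with only two condition levels, the ANOVA main effect reduces to a paired t-test.

```python
import numpy as np
from scipy import stats

# Hypothetical per-subject Talairach y-coordinates (mm), averaged over
# hemispheres; the means follow Table 1, the scatter is invented.
rng = np.random.default_rng(1)
y_p2m = -17.0 + rng.normal(0.0, 4.0, 6)
y_meran = 7.0 + rng.normal(0.0, 4.0, 6)

# With two condition levels, the ANOVA main effect of condition is
# equivalent to a paired t-test: F(1, n-1) = t**2.
t, p = stats.ttest_rel(y_meran, y_p2m)
F = t ** 2
```

The same construction applies to the z-coordinates; only the means change.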
Discussion
Only very small P1 and N1 responses were elicited by all chords. This is
presumably a consequence of the continuous stimulus presenta-
tion, in which one chord directly followed the other; the onset
of each chord was not an abrupt change in loudness. Particular-
ly, the N1 is thought to correspond to transient detection, because
the N1 is evoked by sudden changes in the level of energy impinging on the sensory receptors.
In-key chords elicited a distinct magnetic field effect, which was
maximal around 200 ms (P2m). The P2m is suggested here as
the magnetic equivalent of the electrical P2, because of its time
course, its ‘polarity,’ and the location and orientation of its gen-
erators. (The generators would produce a positive electrical
potential over fronto-central scalp regions.) The average of the
transformed individual dipole solutions yielded two generators of
the P2m, one located in each hemisphere in
the middle of Heschl’s gyrus (that is, within
or near the primary auditory cortex, near the
generators of the P1m
and the N1m).
The dipole of the P2m tended to have a
stronger dipole moment in the right compared
to the left hemisphere. This finding might
reflect a preference of the right hemisphere for
the processing of tones and chords.
Neapolitan chords at the fifth position of
the chord sequences elicited magnetic fields
that differed distinctly from those elicited by
in-key chords at the same position (although
participants were instructed to ignore the
harmonies; see Methods). Neapolitan chords
elicited an early magnetic field effect that was
maximal around 200 ms, the mERAN. The mERAN is regard-
ed here as the magnetic equivalent of the (electrical) ERAN.
Four findings support this assumption. First, the mERAN was
sensitive to harmonically inappropriate chords. Second, the
time course of the mERAN was virtually identical to the time
course of the ERAN. Third, in all subjects, the fields of the
mERAN had an inverted polarity compared to the fields of the
P2m (corresponding to the ERAN and the P2). Fourth, the
mERAN is, like the ERAN, considerably smaller (about 50%)
when elicited by Neapolitan chords at the third versus the fifth
position (see below).
In contrast to the P2m, the generators of the mERAN were
not located within the temporal lobe. The mERAN was generat-
ed approximately 2.5 cm anterior to and 1.0 cm superior to the
P2m in both hemispheres, namely, in each hemisphere within
the inferior part of BA 44 (inferior part of the pars opercularis).
In the left hemisphere, this is known as Broca’s area.
The mERAN, like the ERAN, is suggested to reflect the
brain’s response to a harmonic context violation. Chords pre-
ceding the Neapolitan chords at the fifth position strongly estab-
lished a tonal key. During such a sequence, listeners build up a
‘hierarchy of harmonic stability’, which induces strong har-
monic expectations for harmonically appropriate chords to fol-
low. At the fifth position of the chord sequences, a tonic chord
was harmonically most appropriate. Instead of a tonic, a
Neapolitan chord occurred, which contained out-of-key notes
and therefore sounded unexpected in the established tonal environment. Moreover, the remaining in-key note of the
Neapolitan chords (in C major, f ) was also unexpected, because
it does not belong to the tonic chord. The ability to perceive
distances between chords (and keys, respectively) and to expect certain harmonies (and harmonic functions) to a higher or lower degree can only rely on a representation of the principles of harmonic relatedness described by music theory. These principles, or rules, were reflected in the harmonic expectancies of listeners and may be interpreted as musical syntax (see Introduction).

Fig. 4. Mean global field power signals of the mERAN (MGFP averaged over all MEG channels and all subjects). The MGFP was significantly stronger (shaded area) at the fifth position versus the third position.
The mERAN was distinctly larger when elicited at the fifth
compared to the third position. This finding supports the hypoth-
esis that the mERAN reflects music-syntactic processing, because
the degree of music-syntactic incongruity was higher at the fifth
compared to the third position. Because of the musical context
buildup, which was more specific at the end of the sequence, and
because of the inappropriate chord function of a Neapolitan at
the fifth position (subdominant-variation instead of tonic), the
musical syntax was violated to a higher degree at the fifth com-
pared to the third position.
mERAN and MMNm
The mERAN is generated more anteriorly than the mismatch
negativity (MMN) or its magnetic equivalent (MMNm).
Whereas the MMN receives its main contributions from gen-
erators located in temporal areas
, we found that the mERAN
was generated in the frontal lobe. Frontal contributions to the
MMN have been reported for the frequency-MMN (with EEG,
but not with MEG), but not in Broca’s area or its homologue. More-
over, Neapolitan chords were not physical oddballs (no physi-
cal regularity preceded the Neapolitan chords); thus no
frequency- or spectral-MMN could be elicited. Therefore,
results support the hypothesis that the mERAN is not an
MMN, at least not the ‘classical’ MMN.
The Neapolitan chords at the third and fifth position were
physically identical. Therefore, it could only be the degree of
music-syntactic incongruity, defined purely in music-theoretical
terms, that modulated the amplitude of the mERAN. That is, the
finding that the mERAN is larger when elicited at the fifth posi-
tion again strongly supports the hypothesis that the mERAN is
not an MMN. Rather, the results indicate that the mERAN is
specifically correlated with the processing of auditory informa-
tion within a complex rule-based context.
Inferior BA 44 and syntax processing in language
Broca’s area and its right homologue, particularly the inferior
part of BA 44, are involved in the processing of syntactic aspects
during language comprehension, and are specialized for fast
and automatic syntactic parsing processes. The early left ante-
rior negativity (ELAN) reflecting these processes is also gen-
erated, at least partially, in Broca’s area and its right-hemisphere
homologue. The dipole solution for the magnetic ELAN reveals
dipoles in the left and the right inferior frontal cortex (with very
similar locations as the mERAN) in addition to bilateral tempo-
ral dipoles. As described in the Introduction, the ERAN high-
ly resembles the ELAN, though with a different hemispheric
lateralization.
The present results indicate that Broca’s area and its right-
hemisphere homologue might also be involved in the processing
of musical syntax, suggesting that these brain areas process con-
siderably less domain-specific syntactic information than previ-
ously believed. Like syntactic information of language, which is
fast and automatically processed in Brocas area and its right-
hemisphere homologue, music-syntactic information processed
in the same brain structures also seems to be processed auto-
matically. The magnetic fields of the mERAN were, in four of
six subjects (but not in the grand average), stronger over the right
than over the left hemisphere. This finding is consistent with the
ELAN, which is prevalently (although not consistently) stronger
over the left hemisphere. It is thus suggested here, as a working
hypothesis, that the left pars opercularis is more involved in the
processing of language syntax, and the right pars opercularis more
in the processing of musical syntax. However, both hemispheres
seem to be considerably activated in both domains.
In the present study, harmonically inappropriate chords
activated Broca’s area and its right-hemisphere homologue.
This finding is important for several reasons. First, it demon-
strates that complex rule-based information is processed in
these areas with considerably less domain-specificity than pre-
viously believed. This might suggest that these areas
process syntax, that is, complex rule-based information, in a
domain other than language. This finding might lead to new
investigations of syntax processing in the musical, or even other
auditory but non-linguistic domains. Second, it reveals from
a functional-neuroanatomical view a strong relationship
between the processing of language and music. This relation-
ship might at least partly account for influences of musical
training on verbal abilities. Third, the present study intro-
duced a new method of investigating music perception using
Fig. 5. Grand average dipole solutions for P2m and mERAN. Grand aver-
age dipole solutions, yellow; P2m, top; mERAN, bottom. Each panel shows
left and right sagittal and axial (parallel to AC-PC line) view. Dipole solu-
tions for both the P2m and the mERAN refer to two-dipole configurations
(one dipole in each hemisphere). Blue discs, single subject solutions.
Table 1. Locations and strengths of P2m and mERAN
dipoles (grand average of back-transformed dipole solutions).
Dipole coordinates (x, y, z) and dipole moments (Q)
P2m left P2m right mERAN left mERAN right
(mean ± s.e.m.) (mean ± s.e.m.) (mean ± s.e.m.) (mean ± s.e.m.)
x (mm) –45 ± 2 51 ± 2 –48 ± 5 50 ± 3
y (mm) –16 ± 4 –19 ± 2 9 ± 4 6 ± 4
z (mm) 4 ± 2 4 ± 1 16 ± 1 14 ± 2
Q (nAm) 14 ± 5 22 ± 10 31 ± 15 35 ± 12
Values are given with respect to the Talairach coordinate system.
MEG. Effects were elicited in ‘non-musicians’ (even though
Neapolitan chords were task-irrelevant), supporting the
hypothesis of an (implicit) musical ability of the human brain,
and enabling a broad generalization of the present findings.
Methods
Subjects. Six right-handed and normal-hearing subjects (20 to 27 years
old; mean, 22.5; 4 females) participated in the experiment. Subjects were
non-musicians, that is, they had never learned to sing or play an instrument,
and they did not have any special musical education besides normal
school education.
Stimuli. The pool of stimuli consisted of 128 different chord sequences;
each sequence consisted of five chords. The first chord was always the
tonic of the following chord sequence; chords at the second position
were tonic, mediant, submediant, subdominant, dominant of the dom-
inant, secondary dominant of mediant, secondary dominant of sub-
mediant or secondary dominant of supertonic. The third position chord
was subdominant, dominant, dominant six-four, Neapolitan sixth, or if
preceded by a secondary dominant-mediant, the submediant or super-
tonic. The fourth position chord was the dominant seventh, and the
fifth position chord was either the tonic or the Neapolitan sixth. Texture
of chords followed the classical theory of harmony. From the pool of
128 sequences, 1350 chord sequences were randomly chosen such that
the secondary dominants, Neapolitan chords at the third position, and
Neapolitan chords at the fifth position of a sequence occurred with a
probability of 25% each. Presentation time for chords 1 to 4 was 600 ms,
and for chord 5, 1200 ms. In 10% of the sequences, an in-key chord
from position 2–5 was played by an instrument other than piano. Chord
sequences were presented in direct succession. The same stimuli were
used in experiment 1 of the preceding EEG study.
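The trial proportions described above can be sketched with a simple sampler. The type labels are ours, and the chord voicings themselves are not modeled; only the 25%-each sampling of the 1350 trials is reproduced.

```python
import random

# Draw 1350 trials so that secondary dominants, third-position
# Neapolitans, and fifth-position Neapolitans each occur with
# probability 0.25; the remainder are plain in-key cadences.
random.seed(0)
TYPES = ["secondary_dominant", "neapolitan_3rd", "neapolitan_5th", "in_key"]
trials = random.choices(TYPES, weights=[1, 1, 1, 1], k=1350)
share = {t: trials.count(t) / len(trials) for t in TYPES}  # ~0.25 each
```

With 1350 draws, each observed share lands close to the nominal 25%.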
Procedure. Three experimental sessions were conducted (each compris-
ing three blocks). Participants were only informed about the deviant
instruments, not about the Neapolitan chords or their nature. Partici-
pants were instructed to ignore the harmonies.
MEG recording. The continuous raw MEG was recorded using the 4D-
Neuroimaging Magnes WHS 2500 whole-head system (San Diego, Cal-
ifornia), which used 148 magnetometer channels, 11 magnetic reference
channels and four EOG channels. Signals were digitized with a band-
width of 0.1 Hz to 50 Hz and a sampling rate of 254.31 Hz. The contin-
uous MEG data were filtered off-line with a 2–10 Hz band-pass filter
(1001 points, FIR). All subjects’ averaged data were transformed onto a
sensor-position representative for all blocks of this subject using ASA
(ANT Software, Enschede, The Netherlands) and were accumulated per
subject across all blocks and sessions.
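The off-line filtering step can be sketched with SciPy. The filter length (1001 points), band edges (2–10 Hz), and sampling rate are taken from the text; the FIR design used by the original analysis software is not specified, so a windowed-sinc (Hamming) design is assumed here.

```python
import numpy as np
from scipy import signal

fs = 254.31                                    # sampling rate (Hz)
taps = signal.firwin(1001, [2.0, 10.0],        # 1001-point FIR band-pass
                     pass_zero=False, fs=fs)

# Check the response: pass band near unity gain, stop bands attenuated.
freqs = np.array([0.5, 6.0, 30.0])             # Hz
_, h = signal.freqz(taps, 1, worN=freqs, fs=fs)
gains = np.abs(h)

# Zero-phase filtering of a hypothetical single-channel trace.
x = np.random.default_rng(0).normal(size=10_000)
y = signal.filtfilt(taps, [1.0], x)
```

`filtfilt` is used so the filter adds no phase delay, which matters when component latencies are reported to within tens of milliseconds.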
Data analysis. For each participant, a realistically shaped volume con-
ductor was constructed, scaled to the subject’s real head size. This was
achieved by adjusting the size of the Curry-Warped brain (an average
brain obtained from more than 100 subjects; Neuroscan Labs, Ster-
ling, Virginia, B. Maess and U. Oertel, Neuroimage, 10, A8, 1999) to
each individual head shape. This method results in independent scal-
ing factors for all three spatial dimensions. The adjustment procedure
thus enabled source localization with an accuracy close to that achieved
with individual MR-based models. These scaling factors were also use-
ful for the transformation of localization results into the Talairach-
sized brain.
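The per-axis transformation can be sketched as a simple anisotropic scaling; the scale factors and the dipole coordinates below are invented for illustration.

```python
import numpy as np

# Independent scaling factors for the three spatial dimensions, as
# produced by the head-size adjustment; the values here are invented.
scale = np.array([0.95, 1.08, 1.02])           # x, y, z factors
dipole_head = np.array([-50.0, 8.0, 15.0])     # dipole location (mm)
dipole_talairach = dipole_head * scale         # Talairach-sized brain
```

Applying the same three factors to every dipole keeps all localization results in one common space for averaging across subjects.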
To achieve a higher signal-to-noise ratio, the ERFs evoked by all in-
key chords were combined. (The magnetic field maps of the P1, N1 and
P2m virtually did not differ between in-key chords presented at differ-
ent positions within the chord-sequences.) Dipole orientations were sep-
arated into tangential and radial contributions for each subject. The radial
contributions were then eliminated. The criterion for an acceptable dipole
solution was the explanation of at least 90% of normalized variance for
each subject. The data of only two subjects fulfilled the criterion for the
mERAN elicited at the third position, so no grand-average of dipole
solutions was done in this case.
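The 90%-variance acceptance criterion can be sketched as follows; the sensor vectors are simulated, and the actual forward model is not reproduced.

```python
import numpy as np

def explained_variance(measured, modeled):
    """Share of measured field variance explained by the dipole model."""
    residual = measured - modeled
    return 1.0 - (residual ** 2).sum() / (measured ** 2).sum()

# Simulated sensor vectors: a forward-modeled field plus sensor noise.
rng = np.random.default_rng(4)
modeled = rng.normal(size=148)                 # hypothetical dipole field
measured = modeled + rng.normal(0.0, 0.2, 148)

gof = explained_variance(measured, modeled)
accept = bool(gof >= 0.90)                     # the 90% criterion
```

One minus this quantity is the residual normalized variance reported in the Results (5% for the mERAN, 4% for the P2m).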
This work was supported by the Leibniz Science Prize awarded to A.D. Friederici
by the German Research Foundation.
Note: Examples of the stimuli are available on the Nature Neuroscience web site.
1. Swain, J. Musical Languages (Norton, UK, 1997).
2. Sloboda, J. The Musical Mind: The Cognitive Psychology of Music (Oxford
Univ. Press, New York, 1985).
3. Lerdahl, F. & Jackendoff, R. A Generative Theory of Tonal Music (MIT Press,
Cambridge, Massachusetts, 1999).
4. Raffman, D. Language, Music, and Mind (MIT Press, Cambridge,
Massachusetts, 1993).
5. Patel, A. D., Gibson, E., Ratner, J., Besson, M. & Holcomb, P. Processing
syntactic relations in language and music: an event-related potential study.
J. Cogn. Neurosci. 10, 717–733 (1998).
6. Koelsch, S., Gunter, T., Friederici, A. D. & Schröger, E. Brain indices of music
processing: ‘non-musicians’ are musical. J. Cogn. Neurosci. 12, 520–541 (2000).
7. Krumhansl, C. & Kessler, E. Tracing the dynamic changes in perceived tonal
organization in a spatial representation of musical keys. Psychol. Rev. 89,
334–368 (1982).
8. Bharucha, J. & Krumhansl, C. The representation of harmonic structure in
music: hierarchies of stability as a function of context. Cognition 13, 63–102 (1983).
9. Bharucha, J. & Stoeckig, K. Reaction time and musical expectancy: priming
of chords. J. Exp. Psychol. Hum. Percept. Perform. 12, 403–410 (1986).
10. Friederici, A. D., ed. Language Comprehension: A Biological Perspective
(Springer, Berlin, 1998).
11. Bharucha, J. Anchoring effects in music: the resolution of dissonance. Cognit.
Psychol.16, 485–518 (1984).
12. Bharucha, J. & Stoeckig, K. Priming of chords: spreading activation or
overlapping frequency spectra? Percept. Psychophys. 41, 519–524 (1987).
13. Clynes, M. in Average Evoked Potentials: Methods, Results and Evaluations
(eds. Donchin, E. & Lindsley, D.) 363–374 (US Government Printing Office,
Washington, DC, 1969).
14. Näätänen, R. Attention and Brain Function (Erlbaum, Hillsdale, New Jersey, 1992).
15. Liegeois-Chauvel, C., Musolino, A., Barier, J., Marquis, P. & Chauvel, P.
Evoked potentials recorded from the auditory cortex in man: evaluation and
topography of the middle latency components. Electroencephalogr. Clin.
Neurophysiol. 92, 204–214 (1994).
16. Mäkelä, J., Hämäläinen, M., Hari, R. & McEvoy, L. Whole-head mapping of
middle-latency auditory magnetic fields. Electroencephalogr. Clin.
Neurophysiol. 92, 414–421 (1994).
17. Pantev, C. et al. Specific tonotopic organizations of different areas of the
human auditory cortex revealed by simultaneous magnetic and electric
recordings. Electroencephalogr. Clin. Neurophysiol. 94, 26–40 (1995).
18. Hari, R., Aittoniemi, M., Jarvinen, M., Katila, T. & Varpula, T. Auditory
evoked transient and sustained magnetic fields of the human brain. Exp.
Brain Res. 40, 237–240 (1980).
19. Pantev, C., Hoke, M., Lütkenhöner, B. & Lehnertz, K. Tonotopic organization
of the auditory cortex: pitch versus frequency representation. Science 246,
486–488 (1989).
20. Pantev, C. et al. Identification of sources of brain neuronal activity with high
spatiotemporal resolution through combination of neuromagnetic source
localization (NMSL) and magnetic resonance imaging (MRI).
Electroencephalogr. Clin. Neurophysiol. 75, 173–184 (1990).
21. Zatorre, R., Evans, A., Meyer, E. & Gjedde, A. Lateralization of phonetic and
pitch discrimination in speech processing. Science 256, 846–849 (1992).
22. Auzou, P. et al. Topographic EEG activations during timbre and pitch
discrimination tasks using musical sounds. Neuropsychologia 33, 25–37 (1995).
23. Levänen, S., Ahonen, A., Hari, R., McEvoy, L. & Sams, M. Deviant auditory
stimuli activate human left and right auditory cortex differently. Cereb.
Cortex 6, 288–296 (1996).
24. Tervaniemi, M. et al. Lateralized automatic auditory processing of phonetic
versus musical information: a PET study. Hum. Brain Mapp. 10, 74–79 (2000).
25. Krumhansl, C., Bharucha, J. & Castellano, M. Key distance effects on
perceived harmonic structure in music. Percept. Psychophys. 32, 96–108 (1982).
26. Krumhansl, C., Bharucha, J. & Kessler, E. Perceived harmonic structure of
chords in three related musical keys. J. Exp. Psychol. Hum. Percept. Perform. 8,
24–36 (1982).
27. Berent, I. & Perfetti, C. An on-line method in studying music parsing.
Cognition 46, 203–222 (1993).
28. Alho, K. Cerebral generators of mismatch negativity (MMN) and its
magnetic counterpart (MMNm) elicited by sound changes. Ear Hear. 16,
38–51 (1995).
29. Rinne, T., Alho, K., Ilmoniemi, R., Virtanen, J. & Näätänen, R. Separate time
behaviors of the temporal and frontal mismatch negativity sources.
Neuroimage 12, 14–19 (2000).
30. Giard, M., Perrin, F. & Pernier, J. Brain generators implicated in processing of
auditory stimulus deviance. A topographic ERP study. Psychophysiology 27,
627–640 (1990).
31. Alain, C., Woods, D. L. & Knight, R. T. A distributed cortical network for
auditory sensory memory in humans. Brain Res. 812, 23–37 (1998).
32. Opitz, B., Mecklinger, A., von Cramon, D. Y. & Kruggel, F. Combining
electrophysiological and hemodynamic measures of the auditory oddball.
Psychophysiology 36, 142–147 (1999).
33. Caplan, D., Alpert, N. & Waters, G. Effects of syntactic and propositional
number on patterns of regional cerebral blood flow. J. Cogn. Neurosci. 10,
541–552 (1998).
34. Caplan, D., Alpert, N. & Waters, G. PET-studies of syntactic processing with
auditory sentence presentation. Neuroimage 9, 343–351 (1999).
35. Caplan, D., Alpert, N., Waters, G. & Olivieri, A. Activation of Broca’s area by
syntactic processing under condition of concurrent articulation. Hum. Brain
Mapp. 9, 65–71 (2000).
36. Dapretto, M. & Bookheimer, S. Form and content: dissociating syntax and
semantics in sentence comprehension. Neuron 24, 427–432 (1999).
37. Ni, W. et al. An event-related neuroimaging study distinguishing form and
content in sentence processing. J. Cogn. Neurosci. 12, 120–133 (2000).
38. Friederici, A., Wang, Y., Herrmann, C., Maess, B. & Oertel, U. Localization of
early syntactic processes in frontal and temporal cortical areas: a
magnetoencephalographic study. Hum. Brain Mapp. 11, 1–11 (2000).
39. Just, M., Carpenter, P., Keller, T., Eddy, W. & Thulborn, K. Brain activation
modulated by sentence comprehension. Science 274, 114–116 (1996).
40. Meyer, M., Friederici, A. D. & von Cramon, D. Y. Neurocognition of
auditory sentence comprehension: event related fMRI reveals sensitivity
to syntactic violations and task demands. Cognit. Brain Res. 9, 19–33 (2000).
41. Hahne, A. & Friederici, A. D. Electrophysiological evidence for two steps in
syntactic analysis: early automatic and late controlled processes. J. Cogn.
Neurosci. 11, 194–205 (1999).
42. Koelsch, S., Schröger, E., Gunter, T. & Friederici, A. D. Differentiating ERAN
and MMN: an ERP-study. Neuroreport (in press).
43. Shaywitz, B. et al. Sex differences in the functional organization of the brain
for language. Nature 373, 607–609 (1995).
44. Chan, A. S., Ho, Y. C. & Cheung, M. C. Music training improves verbal
memory. Nature 396, 128 (1998).
45. Douglas, S. & Willatts, P. The relationship between musical ability and
literacy skills. J. Res. Reading 17, 99–107 (1994).
46. Hindemith, P. Unterweisung im Tonsatz, 1. Theoretischer Teil (Schott, Mainz, 1937).
... The same authors (2020), extending earlier findings, fundamentally question the existence of music-induced learning-transfer effects and interpret earlier reports of improved school performance as artifacts that could be explained by cognitive advantages and specific dispositions of children taking instrumental lessons. Jentschke et al. (2005), by contrast, suggest that the overlap of processing mechanisms for linguistic and musical structures in the human brain could well motivate learning-transfer effects between the two domains (Jentschke & Koelsch, 2010; Maess et al., 2001; Miani & Gretenkort, 2016; Schön et al., 2004). On this view, musical learning could, in the medium and long term, positively influence speech-related auditory processing and lead to speech material being encoded more durably not only at the phonetic but also at the syntactic level (Besson & Schön, 2001). ...
... To justify the findings of this study, we can also refer to the noticeable advantage of songs in aiding memory in L2 learning. Based on the empirical findings of imaging research, the same area of the brain is involved in processing melodic patterns and language structures (Maess et al., 2001;Kerekes, 2015). Therefore, aligned with Abbott (2002), it can be argued that since the songs included the rhythmical arrangement of L2, they might have resulted in deeper processing and efficient incidental learning of the intended vocabulary. ...
This study explores the immediate and delayed impacts of songs on implicit vocabulary learning in terms of spoken-form recognition (SFR), form-meaning connection (FMC), and collocation recognition (CR) among Iranian intermediate female English-as-a-foreign-language (EFL) learners. A total of 150 female EFL learners, aged 11 to 15, were selected from the Iran Language Institute in Shahrekord City and randomly assigned to four experimental and two control groups. Two experimental groups and one control group took a pre-test; the remaining groups did not. Likewise, two experimental groups and one control group took an immediate post-test, while all groups took a delayed post-test after three weeks. The data were obtained over five 90-minute sessions; the control groups did not listen to any songs. A two-way ANOVA revealed a statistically significant difference between the experimental group that received both the pre-test and the treatment and the experimental group that received the treatment without a pre-test. Additionally, a three-way ANOVA indicated that the experimental groups outperformed the control groups, supporting the conclusion that the treatment was positively influential in improving the learners' vocabulary learning. Finally, a one-way MANOVA showed that the experimental and control groups performed differently with respect to SFR, FMC, and CR. Based on the findings, some implications are presented for different stakeholders.
... The supramarginal gyrus is considered to support pitch memory (Schaal et al., 2017; Schaal et al., 2015), thus playing an integral role in perceiving familiar melodies. The right IFG, on the other hand, is part of the prosodic network (Sammler et al., 2015), yet also plays a crucial role in "musical syntax", i.e., the processing of non-local, structural and hierarchical dependencies between tones of a melody (Bianco et al., 2016; Cheung et al., 2018; Koelsch, 2006, 2011; Kunert et al., 2015; Maess et al., 2001; Patel, 2005). Again, we interpret this finding as evidence that melodic properties of songs can well be captured by pitch autocorrelations in a neurobiologically plausible way. ...
The neural processing of speech and music is still a matter of debate. A long tradition that assumes shared processing capacities for the two domains contrasts with views that assume domain-specific processing. We here contribute to this topic by investigating, in a functional magnetic resonance imaging (fMRI) study, ecologically valid stimuli that are identical in wording and differ only in that one group is typically spoken (or silently read), whereas the other is sung: poems and their respective musical settings. We focus on the melodic properties of spoken poems and their sung musical counterparts by looking at proportions of significant autocorrelations (PSA) based on pitch values extracted from their recordings. Following earlier studies, we assumed a bias of poem processing toward the left hemisphere and of song processing toward the right. Furthermore, PSA values of poems and songs were expected to explain variance in left- vs. right-temporal brain areas, while continuous liking ratings obtained in the scanner should modulate activity in the reward network. Overall, poem processing compared to song processing relied on left temporal regions, including the superior temporal gyrus, while song processing compared to poem processing recruited more right temporal areas, including Heschl's gyrus and the superior temporal gyrus. PSA values co-varied with activation in bilateral temporal regions for poems, and in right-dominant fronto-temporal regions for songs, while continuous liking ratings were correlated with activity in the default mode network for both poems and songs.
These findings take a middle ground in providing evidence for specific processing circuits for speech and music processing in the left and right hemisphere, but simultaneously for shared processing of melodic aspects of both poems and their musical settings in the right temporal cortex. Thus, we demonstrate the neurobiological plausibility of assuming the importance of melodic properties in spoken and sung aesthetic language alike, along with the involvement of the default mode network in the aesthetic appreciation of these properties in both domains.
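The PSA measure described in the abstract above (the proportion of significant autocorrelations over a pitch series) can be sketched minimally. This is an illustrative reconstruction under a common large-sample significance criterion (|r| > 1.96/√n), not the study's actual pipeline; the helper names `autocorrelation` and `psa` are hypothetical.

```python
import math

def autocorrelation(x, lag):
    """Sample autocorrelation of series x at the given lag."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x)
    cov = sum((x[i] - mean) * (x[i + lag] - mean) for i in range(n - lag))
    return cov / var

def psa(pitches, max_lag=None):
    """Proportion of lags 1..max_lag whose autocorrelation is 'significant'
    under the large-sample criterion |r| > 1.96 / sqrt(n)."""
    n = len(pitches)
    if max_lag is None:
        max_lag = n // 2
    threshold = 1.96 / math.sqrt(n)
    rs = [autocorrelation(pitches, lag) for lag in range(1, max_lag + 1)]
    return sum(1 for r in rs if abs(r) > threshold) / len(rs)

# A strongly periodic pitch contour (MIDI note numbers) yields many
# significant autocorrelation lags; here exactly the even lags pass.
periodic = [60, 62, 64, 62] * 16
print(psa(periodic))  # → 0.5
```

A melody with repeating pitch structure thus scores higher than an aperiodic one, which is the intuition behind using PSA as a proxy for melodicity.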
... In neuroscience, support for absolute-surprise effects comes from studies of dopamine release in musical sections known to evoke "chills" (Salimpoor et al., 2011). As for the contrastive-surprise effect, neuroscience support comes from multiple directions (Patel et al., 1998; Koelsch et al., 2001; Maess et al., 2001; Tillmann et al., 2003). These studies reveal that the brain processes harmonically unexpected events in music much as it processes syntactic errors in language. ...
Obtaining information from the world is important for survival. The brain, therefore, has special mechanisms to extract as much information as possible from sensory stimuli. Hence, given its importance, the amount of available information may underlie aesthetic values. Such information-based aesthetic values would be significant because they would compete with others to drive decision-making. In this article, we ask, “What is the evidence that amount of information supports aesthetic values?” An important concept in the measurement of informational volume is entropy. Research on aesthetic values has thus used Shannon entropy to evaluate the contribution of quantity of information. We review here the concepts of information and aesthetic values, and research on the visual and auditory systems, to probe whether the brain uses entropy or other relevant measures, especially Fisher information, in aesthetic decisions. We conclude that information measures contribute to these decisions in two ways. First, the absolute quantity of information can modulate aesthetic preferences for certain sensory patterns. However, the preference for volume of information is highly individualized, with information measures competing with organizing principles such as rhythm and symmetry. In addition, people tend to be resistant to too much entropy, but not necessarily to high amounts of Fisher information. We show that this resistance may stem in part from the distribution of amount of information in natural sensory stimuli. Second, the measurement of entropic-like quantities over time reveals that they can modulate aesthetic decisions by varying degrees of surprise given temporally integrated expectations. We propose that amount of information underpins complex aesthetic values, possibly informing the brain on the allocation of resources or the situational appropriateness of some cognitive models.
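Since the abstract above leans on Shannon entropy as the measure of informational volume, a minimal sketch may help: the entropy (in bits) of the empirical distribution of sensory events, here pitch symbols. The function name and the toy melodies are illustrative, not taken from the paper.

```python
import math
from collections import Counter

def shannon_entropy(events):
    """Shannon entropy (in bits) of the empirical distribution of events."""
    counts = Counter(events)
    total = len(events)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A melody dwelling on a few pitches carries less information than one
# spread evenly over many pitches.
low = shannon_entropy(["C", "C", "C", "G", "C", "C", "G", "C"])
high = shannon_entropy(["C", "D", "E", "F", "G", "A", "B", "C#"])
print(round(low, 3), high)  # → 0.811 3.0
```

The second melody reaches the maximum of 3 bits for eight equiprobable symbols, which is the "high entropy" extreme that, per the abstract, listeners tend to resist.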
Recent statistical studies have suggested a relationship between increased harmonic surprise and music preference. Conclusive behavioral evidence to establish this relationship is still lacking. We set out to address this gap through a behavioral study using computer-generated stimuli designed to differ only in contrastive and absolute harmonic surprise. We produced the stimuli with both experimental control and ecological validity in mind by engaging the help of studio musicians. The stimuli were rated for preference by 84 participants (44 female, 40 male) between 18 and 65 years old. Participants rated items featuring moderately increased absolute and contrastive surprise significantly higher than items with lower harmonic surprise. This effect applied only to levels of surprise within a range typically found in popular music, however. Excessive surprises did not yield an increase in preference. We discuss different mechanisms of consistency and how they may mediate the selection of neural strategies leading to preference formation. These findings provide evidence of a causal behavioral relationship between harmonic surprise and music preference.
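The "harmonic surprise" discussed above is commonly operationalized as surprisal: the negative log-probability of a chord given corpus statistics. A minimal sketch under that assumption follows; the probability table is invented for illustration and is not the distribution used in the study.

```python
import math

# Toy chord frequencies within a key -- hypothetical numbers for
# illustration, not the corpus statistics used in the study above.
chord_probs = {"I": 0.30, "IV": 0.20, "V": 0.25, "vi": 0.15, "bVI": 0.02}

def surprisal(chord):
    """Harmonic surprise of a chord in bits: -log2 of its probability."""
    return -math.log2(chord_probs[chord])

# A rare borrowed chord (bVI) is far more surprising than the tonic.
print(round(surprisal("I"), 2))    # → 1.74
print(round(surprisal("bVI"), 2))  # → 5.64
```

Under this formulation, "moderately increased surprise" corresponds to chords a few bits above the sequence average, while "excessive" surprise corresponds to near-zero-probability events.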
Background Neurocognitive models of language processing highlight the role of the left inferior frontal gyrus (IFG) in the functional network underlying language. Furthermore, neuroscience research has shown that IFG is not a uniform region anatomically, cytoarchitectonically or functionally. However, no previous study explored the language-related functional connectivity patterns of IFG subdivisions using a meta-analytic connectivity modeling (MACM) approach. Purpose The present MACM study aimed to identify language-related coactivation patterns of the left and right IFG subdivisions. Method Six regions of interest (ROIs) were defined using a probabilistic brain atlas corresponding to pars opercularis, pars triangularis and pars orbitalis of IFG in both hemispheres. The ROIs were used to search the BrainMap functional database to identify neuroimaging experiments with healthy, right-handed participants reporting language-related activations in each ROI. Activation likelihood estimation analyses were then performed on the foci extracted from the identified studies to compute functional convergence for each ROI, which was also contrasted with the other ROIs within the same hemisphere. Results A primarily left-lateralized functional network was revealed for the left and right IFG subdivisions. The left-hemispheric ROIs exhibited more robust coactivation than the right-hemispheric ROIs. Particularly, the left pars opercularis was associated with the most extensive coactivation pattern involving bilateral frontal, bilateral parietal, left temporal, left subcortical, and right cerebellar regions, while the left pars triangularis and orbitalis revealed a predominantly left-lateralized involvement of frontotemporal regions. Conclusion The findings align with the neurocognitive models of language processing that propose a division of labor among the left IFG subdivisions and their respective functional networks. 
Also, the opercular part of left IFG stands out as a major hub in the language network with connections to diverse cortical, subcortical and cerebellar structures.
Music is ubiquitous across human cultures — as a source of affective and pleasurable experience, moving us both physically and emotionally — and learning to play music shapes both brain structure and brain function. Music processing in the brain — namely, the perception of melody, harmony and rhythm — has traditionally been studied as an auditory phenomenon using passive listening paradigms. However, when listening to music, we actively generate predictions about what is likely to happen next. This enactive aspect has led to a more comprehensive understanding of music processing involving brain structures implicated in action, emotion and learning. Here we review the cognitive neuroscience literature of music perception. We show that music perception, action, emotion and learning all rest on the human brain’s fundamental capacity for prediction — as formulated by the predictive coding of music model. This Review elucidates how this formulation of music perception and expertise in individuals can be extended to account for the dynamics and underlying brain mechanisms of collective music making. This in turn has important implications for human creativity as evinced by music improvisation. These recent advances shed new light on what makes music meaningful from a neuroscientific perspective. People may respond to listening to music by physically moving or feeling emotions. In this Review, Peter Vuust and colleagues discuss how music perception and related actions, emotions and learning are associated with the predictive capabilities of the human brain, with a focus on their predictive coding of music model.
In natural listening situations, speech perception is often impaired by degraded speech sounds arriving at the ear. Contextual speech information can improve the perception of degraded speech and modify neuronal responses elicited by degraded speech. However, most studies on context effects on neural responses to degraded speech confounded lexico-semantic and sublexical cues. Here, we used fMRI to investigate how prior sublexical speech (e.g. pseudowords cues) affects neural responses to degraded sublexical speech and hence its processing and recognition. Each trial consisted of three consecutively presented pseudowords, of which the first and third were identical and degraded. The second pseudoword was always presented in clear form and either matched or did not match the degraded pseudowords. Improved speech processing through sublexical processing was associated with BOLD activation increases in frontal, temporal, and parietal regions, including the primary auditory cortex (PAC), posterior superior temporal cortex, angular gyrus, supramarginal gyrus, middle temporal cortex, and somato-motor cortex. These brain regions are part of a speech processing network and are involved in lexico-semantic processing. To further investigate the adaptive changes in PAC, we conducted a bilateral region of interest analysis on PAC subregions. PAC ROIs showed bilaterally increased activation in the match condition compared with the mismatch condition. Our results show that the perception of unintelligible degraded speech is improved and the neuronal population response is enhanced after exposure to intact sublexical cues. Furthermore, our findings indicate that the processing of clear meaningless sublexical speech preceding degraded speech could enhance the activity in the brain regions that belong to the cortical speech processing network previously reported in studies investigating lexico-semantic speech.
Cerebral activation was measured with positron emission tomography in ten human volunteers. The primary auditory cortex showed increased activity in response to noise bursts, whereas acoustically matched speech syllables activated secondary auditory cortices bilaterally. Instructions to make judgments about different attributes of the same speech signal resulted in activation of specific lateralized neural systems. Discrimination of phonetic structure led to increased activity in part of Broca's area of the left hemisphere, suggesting a role for articulatory recoding in phonetic perception. Processing changes in pitch produced activation of the right prefrontal cortex, consistent with the importance of right-hemisphere mechanisms in pitch perception.
Research has shown that a relationship exists between phonological awareness and literacy skills. It has been suggested that a structured programme of musical activities can be used to help children develop a multi-sensory awareness of and response to sounds. The relationship between musical ability and literacy skills was examined in a study that showed an association between rhythmic ability and reading. A further pilot intervention study showed that training in musical skills is a valuable additional strategy for assisting children with reading difficulties.
This document presents the proceedings of a conference sponsored by the National Aeronautics and Space Administration and the American Institute for Biological Sciences. The conference was held in San Francisco in September 1968 to discuss current problems in the study of average evoked potential. As can be seen from the list of participants, most laboratories, in this country and abroad, that actively use signal-averaging techniques in processing electroencephalographic records were represented at the conference. Our objective in organizing this conference was to provide a forum for discussing the problems involved in conducting these studies and in communicating the results of experiments. For this purpose, the conference was organized in the following format. Six investigators were invited to prepare critical reviews of the literature—each on one of six assigned topics. The reviews were made available to all the conference participants 4 to 6 weeks before the conference. Each review was to serve as the text for one 3-hour session at the conference. The reviewer was allotted 20 minutes to restate some of the main points presented in his paper; then discussion was opened to all participants. The discussions were moderated in each case by an assigned discussant. Chapters 2 through 7 present the review papers and the ensuing discussion. The remarks made by the reviewer were deleted since their substance is presented in the review. All the principal speakers completed their assignment on time, and the reviews were sent to the participants. However Dr. Vaughan's report, as included in this volume, is substantially different from the document that he circulated to the participants. For this reason, the discussants ignore much of the material presented in his present chapter. In addition to the working sessions of the conference, two evening sessions featured extended presentations. In the first one, Dr. 
Lindsley surveyed the evoked potential technique, its history, and achievements; in the second one, Dr. Frank Morrell discussed the neurophysiological mechanisms underlying the average evoked response. Dr. Lindsley's talk provided the material for Chapter 1. A supplement contains reports that were submitted by participants to expand and elaborate upon some of the comments they made in the discussion. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
The neurophysiological mechanisms underlying mismatch negativity (MMN) can be inferred from an examination of some of the brain generators involved in the process of this event-related potential (ERP) component. ERPs were recorded in two studies in which the subjects were involved in a selective dichotic listening task. Subjects were required to silently count rare stimuli deviating in pitch from a sequence of standard stimuli in one ear, while ignoring all the stimuli (standards and deviants) delivered randomly to the other ear. The results showed that, in all cases, the negative wave elicited by the deviant stimuli showed the highest amplitudes over the right hemiscalp irrespective of the ear of stimulation or the direction of attention. Scalp radial current density analysis showed that this asymmetric potential distribution could be attributed to the sum of activities of two sets of neural generators: one temporal, located in the vicinity of the primary auditory cortex, predominantly activated in the hemisphere contralateral to the ear of stimulation, and the other frontal, involving mainly the right hemisphere. The results are discussed in light of Näätänen's model: we suggest the dissociation of two functional processes on the basis of activity of distinct brain areas: a sensory memory mechanism related to the temporal generators, and an automatic attention-switching process related to the frontal generators.
The neural mechanisms of deviancy and target detection were investigated by combining high-density event-related potential (ERP) recordings with functional magnetic resonance imaging (fMRI). ERP and fMRI responses were recorded using the same paradigm and the same subjects. Unattended deviants elicited a mismatch negativity (MMN) in the ERP. In the fMRI data, bilateral activations of the transverse/superior temporal gyri were found. Attended deviants generated an MMN followed by an N2/P3b complex. For this condition, fMRI activations in both superior temporal gyri and the neostriatum were found. These activations were taken as neuroanatomical constraints for the localization of equivalent current dipoles. Inverse solutions for dipole orientation provide evidence for significant activation close to Heschl's gyri during deviancy processing in the 110–160-ms time interval (MMN), whereas target detection could be modeled by two dipoles in the superior temporal gyrus between 320 and 380 ms.