Musical imagery: sound of silence activates auditory cortex

Auditory imagery occurs when one mentally rehearses telephone numbers or has a song 'on the brain' — it is the subjective experience of hearing in the absence of auditory stimulation, and is useful for investigating aspects of human cognition1. Here we use functional magnetic resonance imaging to identify and characterize the neural substrates that support unprompted auditory imagery, and find that auditory and visual imagery seem to obey similar basic neural principles.
The few studies that have examined the topic of auditory imagery2–5 have focused on the neural substrates of directed imagery (for example, "imagine a tone"). What is not known, however, is whether similar principles guide the more pervasive and spontaneous forms of imagery that punctuate everyday life. We used functional magnetic resonance imaging to investigate the recruitment of auditory cortex during spontaneous auditory imagery of excerpts of popular music.
During scanning, subjects passively listened to excerpts of songs with lyrics (for example, Satisfaction by the Rolling Stones) and to instrumentals that contained no lyrics (for example, the theme from The Pink Panther). Each piece of music was pre-rated by subjects as either familiar or unknown, and a unique soundtrack was created for each individual.
Our findings offer a neural basis for the spontaneous and sometimes vexing experience of hearing a familiar melody in one's head. Whereas previous investigations have explicitly directed subjects to imagine a specific auditory experience2–4, we provided no instruction. Instead, simply muting short gaps of familiar music was sufficient to trigger auditory imagery — a finding that indicates the obligatory nature of this phenomenon. Corroborating this observation, all subjects reported subjectively hearing a continuation of the familiar songs, but not of the unfamiliar songs, during the gaps in the music.
We note also that the extent of neural activity in the primary auditory cortex was determined by the linguistic features of the imagined experience. When semantic knowledge (that is, lyrics) could be used to generate the missing information, reconstruction terminated in auditory association areas. When this meaning-based route to reconstruction was unavailable (as in instrumentals), activity extended to lower-level regions of the auditory cortex, most notably the primary auditory cortex (Fig. 1b, d).
These findings parallel those in the domain of visual imagery. For example, visual imagery elicited when considering names of objects (known as figural imagery) does not rely on the primary visual cortex6,7. As these 'low-resolution' images do not demand fine-grained perceptual processing, activity in visual-association areas is sufficient to reconstruct the relevant representation. By contrast, when semantic information is absent or irrelevant (known as depictive imagery), a 'high-resolution' perceptual image is needed to reconstruct a representation, hence activity extends into the primary visual cortex8. Our results provide evidence that auditory imagery obeys the same basic neural principles.
David J. M. Kraemer*, C. Neil Macrae*†, Adam E. Green*, William M. Kelley*
*Department of Psychological and Brain Sciences, Dartmouth College, Hanover, New Hampshire, USA
†School of Psychology, University of Aberdeen, Aberdeen AB24 2UB, UK
1. Kosslyn, S. M., Ganis, G. & Thompson, W. L. Nature Rev. Neurosci. 2, 635–642 (2001).
2. Halpern, A. R. & Zatorre, R. J. Cereb. Cortex 9, 697–704 (1999).
3. Wheeler, M. E., Petersen, S. E. & Buckner, R. L. Proc. Natl Acad. Sci. USA 97, 11125–11129 (2000).
4. Yoo, S. S., Lee, C. U. & Choi, B. G. Neuroreport 12, 3045–3049 (2001).
5. Hughes, H. C. et al. Neuroimage 13, 1073–1089 (2001).
6. Fletcher, P. C. et al. Neuroimage 2, 195–200 (1995).
7. Kosslyn, S. M. & Thompson, W. L. Psychol. Bull. 129, 723–746 (2003).
8. Kosslyn, S. M., Thompson, W. L., Kim, I. J. & Alpert, N. M. Nature 378, 496–498 (1995).
9. Van Essen, D. C. et al. J. Am. Med. Inform. Assoc. 41, 1359–1378.
Supplementary information accompanies this communication on Nature's website.
Competing financial interests: declared none.
NATURE | VOL 434 | 10 MARCH 2005 | www.nature.com/nature
Short sections of music (lasting for 2–5 s) were extracted at different points during the soundtrack and replaced with silent gaps. We then monitored the neural activity in subjects that occurred during these gaps. (For details of methods, see supplementary information.)
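The muting step described above, extracting sections of a few seconds and replacing them with silence, can be sketched in a few lines. The paper does not describe its audio tooling, so everything below (the function name, the mono float-array signal representation, and the random placement of gaps) is an illustrative assumption rather than the authors' actual stimulus pipeline:

```python
import numpy as np

def insert_silent_gaps(signal, sr, n_gaps=3, gap_range=(2.0, 5.0), seed=0):
    """Replace `n_gaps` randomly placed sections of a mono float signal
    with silence, each lasting between gap_range[0] and gap_range[1]
    seconds. Returns the muted signal and each gap's (onset, offset) in s."""
    rng = np.random.default_rng(seed)
    out = signal.copy()
    gaps = []
    for _ in range(n_gaps):
        dur = rng.uniform(*gap_range)             # gap duration in seconds
        n = int(dur * sr)                         # gap length in samples
        start = int(rng.integers(0, len(out) - n))  # random gap onset
        out[start:start + n] = 0.0                # mute this section
        gaps.append((start / sr, (start + n) / sr))
    return out, gaps
```

Gaps placed this way can overlap, and in the experiment the excerpt points were presumably chosen per song rather than at random, so this is only a sketch of the muting operation itself.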
Brain activity in the primary auditory cortex and in the auditory association cortex (Brodmann's area 22) (Fig. 1a) was compared during gaps of silence in familiar and unknown songs. The results revealed a functional dissociation within the left auditory cortex (region × music-type interaction: F[1,14] = 48.92, P < 0.0001; Fig. 1b). Silent gaps embedded in familiar songs induced greater activation in auditory association areas than did silent gaps embedded in unknown songs (Fig. 1b); this was true for gaps in songs with lyrics (F[1,14] = 5.46, P < 0.05; Fig. 1c) and without lyrics (F[1,14] = 11.56; Fig. 1d). When familiar songs contained no lyrics, cortical activity extended into the left primary auditory cortex (Fig. 1d).
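For a 2 × 2 fully within-subject design like this one, a 1-degree-of-freedom interaction such as the F[1,14] reported above is equivalent to a one-sample t-test on each subject's difference-of-differences, with F(1, n−1) = t². The sketch below illustrates that equivalence; the function and argument names are hypothetical, and feeding it per-subject signal-change values is an assumption about the analysis, not a reconstruction of the authors' pipeline:

```python
import numpy as np

def interaction_F(a1, a2, b1, b2):
    """Interaction F for a 2x2 repeated-measures design.
    a1, a2: per-subject responses in region A under conditions 1 and 2;
    b1, b2: the same subjects' responses in region B.
    The 1-df interaction equals the squared one-sample t statistic on
    the per-subject difference-of-differences contrast."""
    c = (np.asarray(a1) - np.asarray(a2)) - (np.asarray(b1) - np.asarray(b2))
    n = len(c)
    t = c.mean() / (c.std(ddof=1) / np.sqrt(n))  # one-sample t on contrast
    return t ** 2, (1, n - 1)
```

With 15 subjects the error term has 14 degrees of freedom, matching the F[1,14] statistics quoted in the text.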
We confirmed that these effects were uniquely attributable to the gaps of silence in the music, rather than simply the result of differences in activation in response to hearing different music categories. In contrast to the gap responses, listening to unknown songs produced greater activity in auditory association areas than did familiar songs (lyrics: F[1,14] = 11.24, P < 0.005; instrumentals: F[1,14] = 31.74, P < 0.0001), and activity in the primary auditory cortex did not differ as a function of familiarity (see supplementary information).
Figure 1 Auditory cortex activation during silent gaps in music. a, An inflated rendering of the left hemisphere9 illustrates primary auditory cortex (PAC; red) and auditory association cortex, also known as Brodmann's area 22 (green). The superior temporal sulcus (STS) and inferior temporal sulcus (ITS) are indicated for reference. b, Signal change (arbitrary units) in PAC (red) and Brodmann's area 22 (green) during gaps in familiar songs with lyrics (FL), familiar instrumentals (FI), unknown songs with lyrics (UL) and unknown instrumentals (UI). Error bars denote s.e.m. c, d, Difference in activity, which is greater for familiar songs, during silent gaps embedded in songs with (c) and without (d) lyrics, projected on to flattened views of the left temporal lobe. Dark-grey regions represent sulci; lighter grey regions denote gyri.