Article

Steady-state responses in MEG demonstrate information integration within but not across the auditory and visual senses.

Max Planck Institute for Biological Cybernetics, Tuebingen, Germany.
NeuroImage (Impact Factor: 6.25). 01/2012; 60(2):1478-89. DOI:10.1016/j.neuroimage.2012.01.114
Source: PubMed

ABSTRACT: To form a unified percept of our environment, the human brain integrates information within and across the senses. This MEG study investigated interactions within and between sensory modalities using a frequency analysis of steady-state responses (SSRs) that are elicited time-locked to periodically modulated stimuli. Critically, in the frequency domain, interactions between sensory signals are indexed by crossmodulation terms (i.e., the sums and differences of the fundamental frequencies). The 3 × 2 factorial design manipulated (1) modality: auditory, visual or audiovisual; and (2) steady-state modulation: the auditory and visual signals were modulated either in one sensory feature (e.g. visual gratings modulated in luminance at 6 Hz) or in two features (e.g. tones modulated in frequency at 40 Hz and in amplitude at 0.2 Hz). This design enabled us to investigate the crossmodulation frequencies that are elicited when two stimulus features are modulated concurrently (i) within one sensory modality or (ii) across the auditory and visual modalities. In support of within-modality integration, we reliably identified crossmodulation frequencies when two stimulus features in one sensory modality were modulated at different frequencies. In contrast, no crossmodulation frequencies were identified when information needed to be combined across the auditory and visual modalities. The absence of audiovisual crossmodulation frequencies suggests that the previously reported audiovisual interactions in primary sensory areas may mediate low-level spatiotemporal coincidence detection, which is prominent for stimulus transients but less relevant for sustained SSRs. In conclusion, our results indicate that information in SSRs is integrated over multiple time scales within, but not across, sensory modalities at the primary cortical level.
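
To make the crossmodulation logic concrete, the following is a minimal numerical sketch (Python/NumPy; the 40 Hz and 6 Hz values echo the stimuli above, but the signal model and the multiplicative interaction term are illustrative assumptions, not the authors' analysis pipeline): a purely linear superposition of two frequency-tagged responses carries spectral power only at the fundamentals f1 and f2, whereas a nonlinear interaction adds power at the crossmodulation frequencies f1 - f2 and f1 + f2.

    # Toy model, not the published pipeline: a multiplicative interaction
    # between two frequency-tagged responses creates sidebands at f1 +/- f2.
    import numpy as np

    fs = 1000.0                          # sampling rate (Hz)
    t = np.arange(0, 10, 1 / fs)         # 10 s of signal
    f1, f2 = 40.0, 6.0                   # steady-state modulation frequencies

    x1 = np.sin(2 * np.pi * f1 * t)      # e.g. auditory response tagged at 40 Hz
    x2 = np.sin(2 * np.pi * f2 * t)      # e.g. visual response tagged at 6 Hz

    linear = x1 + x2                     # superposition: power at f1, f2 only
    nonlinear = linear + 0.3 * x1 * x2   # interaction adds f1 - f2 and f1 + f2

    freqs = np.fft.rfftfreq(t.size, 1 / fs)
    for name, sig in (("linear", linear), ("nonlinear", nonlinear)):
        amp = 2 * np.abs(np.fft.rfft(sig)) / t.size   # single-sided amplitude
        peaks = {f"{f:g} Hz": round(amp[np.argmin(np.abs(freqs - f))], 3)
                 for f in (f2, f1, f1 - f2, f1 + f2)}
        print(name, peaks)

Only the nonlinear signal shows peaks at 34 Hz and 46 Hz; this is the spectral signature the study found for two features within one modality, but not for features split across audition and vision.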

  • International Journal of Psychophysiology 06/2013; · 3.05 Impact Factor
  • NeuroImage 08/2012; 63(3):1585-600. · 6.25 Impact Factor
    ABSTRACT: Presentation of a face stimulus for several seconds at a periodic rate elicits a right occipito-temporal steady-state visual evoked potential (SSVEP) confined to the stimulation frequency band. According to recent evidence (Rossion and Boremanse, 2011), this face-related SSVEP is largely reduced in amplitude when the exact same face is repeated at every stimulation cycle compared to the presentation of different individual faces. Here, this SSVEP individual-face repetition effect was tested in 20 participants stimulated with faces at a 4 Hz rate for 84 s, in four conditions: faces upright or inverted, normal or contrast-reversed (2 × 2 design). To study the temporal dynamics of this effect, all stimulation sequences started with 15 s of identical faces, after which, in half of the sequences, different faces were introduced. A larger response to different than to identical faces at the fundamental (4 Hz) and second-harmonic (8 Hz) components was observed for upright faces over the right occipito-temporal cortex. Weaker effects were found for inverted and contrast-reversed faces, two stimulus manipulations known to greatly affect the perception of facial identity; combining the two manipulations decreased the effect further. The phase of the fundamental-frequency SSVEP response was delayed for inverted and contrast-reversed faces, to the same extent as the latency delay at the peak of the face-sensitive N170 component at stimulation-sequence onset. Time-course analysis of the entire stimulation sequence showed an immediate increase in 4 Hz amplitude at the onset (16th second) of different-face presentation, indicating a fast, large and frequency-specific release from individual-face adaptation in the human brain. Altogether, these observations deepen our understanding of the human steady-state face-potential response and further support the value of this approach for studying the neurofunctional mechanisms of face perception.
  • NeuroImage 01/2013; · 6.25 Impact Factor
    ABSTRACT: Sounds can modulate visual perception as well as neural activity in retinotopic cortex. Most studies in this context have investigated how sounds change response amplitude and reset oscillatory phase in visual cortex. However, recent studies in macaque monkeys show that the congruence of audio-visual stimuli also modulates the amount of stimulus information carried by the spiking activity of primary auditory and visual neurons. Here, we used naturalistic video stimuli and recorded the spatial patterns of functional MRI signals in human retinotopic cortex to test whether the discriminability of such patterns varied with the presence and congruence of co-occurring sounds. We found that incongruent sounds significantly impaired stimulus decoding from area V2, with a similar trend for V3. This effect was associated with reduced inter-trial reliability of the patterns (i.e., higher levels of noise) but was not accompanied by any detectable modulation of overall signal amplitude. We conclude that sounds modulate naturalistic stimulus encoding in early human retinotopic cortex without affecting overall signal amplitude. Subthreshold modulation, oscillatory phase reset and dynamic attentional modulation are candidate neural and cognitive mechanisms mediating these effects.
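
The articles above rest on the same frequency-tagging readout as the main study: because a long stimulation sequence contains an integer number of stimulus cycles, the tagged frequency and its harmonics fall exactly on FFT bins, and amplitude and phase can be read out directly from those bins. Below is a minimal sketch on simulated data (Python/NumPy; the 4 Hz tag, 8 Hz harmonic and 84 s duration follow the face study above, while the signal itself is a toy construction, not the published analysis).

    # Toy SSVEP: read out amplitude and phase at the 4 Hz tagging
    # frequency and its 8 Hz second harmonic from a long sequence.
    import numpy as np

    fs = 512.0                        # sampling rate (Hz)
    t = np.arange(0, 84, 1 / fs)      # 84 s stimulation sequence
    f_tag = 4.0                       # stimulation (tagging) frequency

    rng = np.random.default_rng(0)
    eeg = (1.0 * np.sin(2 * np.pi * f_tag * t)               # fundamental
           + 0.4 * np.sin(2 * np.pi * 2 * f_tag * t + 0.5)   # 2nd harmonic
           + rng.normal(0.0, 1.0, t.size))                   # broadband noise

    spec = np.fft.rfft(eeg) / t.size
    freqs = np.fft.rfftfreq(t.size, 1 / fs)

    for f in (f_tag, 2 * f_tag):      # 4 Hz and 8 Hz land exactly on bins
        k = np.argmin(np.abs(freqs - f))
        print(f"{f:4.1f} Hz: amplitude {2 * np.abs(spec[k]):.3f}, "
              f"phase {np.angle(spec[k]):+.2f} rad")

With 84 s of data the frequency resolution is 1/84 Hz, so the tagged responses are concentrated in narrow bins where broadband noise contributes little; a repetition effect such as the one reported above would appear as a condition difference in these bin amplitudes, and a processing delay as a phase shift.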
