Steady-state responses in MEG demonstrate information integration within but not across the auditory and visual senses

Max Planck Institute for Biological Cybernetics, Tuebingen, Germany.
NeuroImage (Impact Factor: 6.36). 01/2012; 60(2):1478-89. DOI: 10.1016/j.neuroimage.2012.01.114

To form a unified percept of our environment, the human brain integrates information within and across the senses. This MEG study investigated interactions within and between sensory modalities using a frequency analysis of steady-state responses (SSRs) that are elicited time-locked to periodically modulated stimuli. Critically, in the frequency domain, interactions between sensory signals are indexed by crossmodulation terms (i.e. the sums and differences of the fundamental frequencies). The 3 × 2 factorial design manipulated (1) modality: auditory, visual, or audiovisual; and (2) steady-state modulation: the auditory and visual signals were modulated in only one sensory feature (e.g. visual gratings modulated in luminance at 6 Hz) or in two features (e.g. tones modulated in frequency at 40 Hz and in amplitude at 0.2 Hz). This design enabled us to investigate the crossmodulation frequencies that are elicited when two stimulus features are modulated concurrently (i) in one sensory modality or (ii) in the auditory and visual modalities. In support of within-modality integration, we reliably identified crossmodulation frequencies when two stimulus features in one sensory modality were modulated at different frequencies. In contrast, no crossmodulation frequencies were identified when information needed to be combined across the auditory and visual modalities. The absence of audiovisual crossmodulation frequencies suggests that the previously reported audiovisual interactions in primary sensory areas may mediate low-level spatiotemporal coincidence detection, which is prominent for stimulus transients but less relevant for sustained SSRs. In conclusion, our results indicate that information in SSRs is integrated over multiple time scales within, but not across, sensory modalities at the primary cortical level.
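
The crossmodulation logic can be made concrete with a short simulation. The Python sketch below (illustrative only, not the authors' analysis code; the sampling rate and interaction strength are assumptions) shows why crossmodulation terms index nonlinear integration: a multiplicative interaction between two frequency-tagged signals produces spectral peaks at the difference and sum frequencies (f2 − f1 and f2 + f1), whereas purely linear summation leaves only the fundamentals.

```python
import numpy as np

fs = 600.0                          # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)        # 10 s of signal
f1, f2 = 6.0, 40.0                  # tagging frequencies used in the study

s1 = np.sin(2 * np.pi * f1 * t)     # e.g. luminance modulation
s2 = np.sin(2 * np.pi * f2 * t)     # e.g. frequency modulation

linear = s1 + s2                    # linear superposition: fundamentals only
nonlinear = linear + 0.5 * s1 * s2  # interaction adds peaks at f2-f1, f2+f1

freqs = np.fft.rfftfreq(t.size, 1 / fs)
for name, sig in (("linear", linear), ("nonlinear", nonlinear)):
    amp = np.abs(np.fft.rfft(sig)) / t.size
    peaks = {f: amp[np.argmin(np.abs(freqs - f))]
             for f in (f1, f2, f2 - f1, f2 + f1)}
    print(name, {f: round(a, 3) for f, a in peaks.items()})
```

With the 6 Hz and 40 Hz tags used here, within-modality integration would thus be expected to surface at 34 Hz and 46 Hz; the linear signal shows essentially zero amplitude at those crossmodulation frequencies.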

    • "Finally, like other previous studies (Giani et al. 2012), we did not identify cross- 554 modulation frequencies between the concurrently presented visual and tactile stimuli. This 555 raises the question of whether multisensory integration processes can actually be tagged using 556 this approach, as only a few studies have shown evidence for the emergence of such cross- 557 modulation SS-EPs across modalities (Nozaradan, Zerouali, Peretz, & Mouraux, 2015; M. P. 558 Regan, et al., 1995). "
    ABSTRACT: The sustained periodic modulation of a stimulus induces an entrainment of cortical neurons responding to the stimulus, appearing as a steady-state evoked potential (SS-EP) in the EEG frequency spectrum. Here, we used frequency tagging of SS-EPs to study the crossmodal links in spatial attention between touch and vision. We hypothesized that a visual stimulus approaching the left or right hand orients spatial attention toward the approached hand, and thereby enhances the processing of vibrotactile input originating from that hand. Twenty-five subjects took part in the experiment: 16-s trains of vibrotactile stimuli (4.2 and 7.2 Hz) were applied simultaneously to the left and right hand, concomitantly with a punctate visual stimulus blinking at 9.8 Hz. The visual stimulus approached either the left or the right hand. The hands were either uncrossed (left and right hands to the left and right of the participant) or crossed (left and right hands to the right and left of the participant). The vibrotactile stimuli elicited two distinct SS-EPs with scalp topographies compatible with activity in the contralateral primary somatosensory cortex. The visual stimulus elicited a third SS-EP with a topography compatible with activity in visual areas. When the visual stimulus was over one of the hands, the amplitude of the vibrotactile SS-EP elicited by stimulation of that hand was enhanced, regardless of whether the hands were uncrossed or crossed. This demonstrates a crossmodal effect of spatial attention between vision and touch, integrating proprioceptive and/or visual information to map the position of the limbs in external space. © 2015 Society for Psychophysiological Research.
    Psychophysiology 08/2015; DOI:10.1111/psyp.12511 · 3.18 Impact Factor
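
    For readers unfamiliar with frequency tagging, the sketch below shows the basic read-out step implied by the abstract above: extracting SS-EP amplitude at each tagged frequency from the FFT of a long epoch. A minimal sketch only; the sampling rate and data shapes are assumptions, and the placeholder epoch is random noise rather than real EEG.

```python
import numpy as np

fs = 500.0                        # assumed sampling rate (Hz)
n = int(16 * fs)                  # one 16-s stimulation train
rng = np.random.default_rng(0)
eeg = rng.standard_normal(n)      # placeholder single-channel epoch

# tag frequencies from the experiment described above
tags = {"left-hand vibration": 4.2,
        "right-hand vibration": 7.2,
        "visual blink": 9.8}

freqs = np.fft.rfftfreq(n, 1 / fs)
amp = np.abs(np.fft.rfft(eeg)) / n

for label, f in tags.items():
    # nearest FFT bin; real pipelines choose epoch lengths so that each
    # tag frequency falls exactly on a frequency bin
    idx = np.argmin(np.abs(freqs - f))
    print(f"{label}: amplitude at {freqs[idx]:.2f} Hz = {amp[idx]:.4f}")
```
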
    • "Crucially, SSRs provide an index of relative attentional allocation to specific stimuli because attention modulates SSR amplitudes in visual (Müller et al. 1998, 2003; Kim et al. 2007), auditory (Ross et al. 2004; Bidet-Caulet et al. 2007; Saupe et al. 2009b) and audio-visual stimulus situations (Saupe et al. 2009a; Keitel et al. 2011, 2013). Frequency-tagging studies have also investigated effects of audio-visual synchrony on SSRs (Jenkins et al. 2011; Giani et al. 2012). Nozaradan et al. (2012) demonstrated enhanced amplitudes (and inter-trial phase coherence) of SSRs driven by an auditory and a visual stimulus when both obeyed a synchronous presentation. "
    ABSTRACT: Our brain relies on neural mechanisms of selective attention and converging sensory processing to efficiently cope with rich and unceasing multisensory inputs. One prominent assumption holds that audio-visual synchrony can act as a strong attractor for spatial attention. Here, we tested for a similar effect of audio-visual synchrony on feature-selective attention. We presented two superimposed Gabor patches that differed in colour and orientation. On each trial, participants were cued to selectively attend to one of the two patches. Over time, spatial frequencies of both patches varied sinusoidally at distinct rates (3.14 and 3.63 Hz), giving rise to pulse-like percepts. A simultaneously presented pure tone carried a frequency modulation at the pulse rate of one of the two visual stimuli to introduce audio-visual synchrony. Pulsed stimulation elicited distinct time-locked oscillatory electrophysiological brain responses. These steady-state responses were quantified in the spectral domain to examine individual stimulus processing under conditions of synchronous versus asynchronous tone presentation and when respective stimuli were attended versus unattended. We found that both attending to the colour of a stimulus and its synchrony with the tone enhanced its processing. Moreover, both gain effects combined linearly for attended in-sync stimuli. Our results suggest that audio-visual synchrony can attract attention to specific stimulus features when stimuli overlap in space.
    Experimental Brain Research 08/2015; DOI:10.1007/s00221-015-4392-8 · 2.04 Impact Factor
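
    The linear combination of attention and synchrony gains reported above amounts to a simple additivity test on SSR amplitudes. A hedged sketch with purely hypothetical amplitudes (not data from the study) illustrates the logic:

```python
# Hypothetical mean SSR amplitudes (arbitrary units) per condition;
# these numbers are invented for illustration only.
amp = {
    ("unattended", "async"): 1.00,   # baseline
    ("attended",   "async"): 1.30,   # attention gain alone
    ("unattended", "sync"):  1.20,   # synchrony gain alone
    ("attended",   "sync"):  1.50,   # observed combined condition
}

attention_gain = amp[("attended", "async")] - amp[("unattended", "async")]
synchrony_gain = amp[("unattended", "sync")] - amp[("unattended", "async")]
predicted = amp[("unattended", "async")] + attention_gain + synchrony_gain

print(f"additive prediction: {predicted:.2f}")
print(f"observed:            {amp[('attended', 'sync')]:.2f}")
# near-equality of the two is what "gain effects combined linearly" means
```
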
    • "In contrast, biological noise is distributed throughout the EEG spectrum, resulting in a SNR in the bandwidth of interest that can be very high (Regan, 1989; Rossion, 2014). Note that such long duration windows have been used in a number of previous studies (e.g., Chen et al., 2003; Di Russo and Spinelli, 2002; Giani et al., 2012; Srinivasan et al., 1999; Sutoyo and Srinivasan, 2009; Zemon and Ratliff, 1984) and the combination of few trials with a long duration has been used in all our previous studies with face stimuli (for review see Rossion (2014)). Three conditions were compared. "
    ABSTRACT: Despite decades of research on reading, including the relatively recent contributions of neuroimaging and electrophysiology, identifying selective representations of whole visual words (in contrast to pseudowords) in the human brain remains challenging, in particular without an explicit linguistic task. Here we measured discrimination responses to written words by means of electroencephalography (EEG) during fast periodic visual stimulation. Sequences of pseudofonts, nonwords, or pseudowords were presented through sinusoidal contrast modulation at a periodic 10 Hz frequency rate (F), in which words were interspersed at regular intervals of every fifth item (i.e., F/5, 2 Hz). Participants monitored a central cross color change and had no linguistic task to perform. Within only 3 min of stimulation, a robust discrimination response for words at 2 Hz (and its harmonics, i.e., 4 and 6 Hz) was observed in all conditions, located predominantly over the left occipito-temporal cortex. The magnitude of the response was largest for words embedded in pseudofonts, and larger in nonwords than in pseudowords, showing that list context effects classically reported in behavioral lexical decision tasks are due to visual discrimination rather than decisional processes. Remarkably, the oddball response was significant even for the critical words/pseudowords discrimination condition in every individual participant. A second experiment replicated this words/pseudowords discrimination, and showed that this effect is not accounted for by a higher bigram frequency of words than pseudowords. Without any explicit task, our results highlight the potential of an EEG fast periodic visual stimulation approach for understanding the representation of written language. Its development in the scientific community might be valuable to rapidly and objectively measure sensitivity to word processing in different human populations, including neuropsychological patients with dyslexia and other reading difficulties.
    Neuropsychologia 11/2014; DOI:10.1016/j.neuropsychologia.2014.11.007 · 3.30 Impact Factor
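
    Both points quoted above — biological noise spread across the EEG spectrum and a discrimination response at F/5 = 2 Hz with harmonics at 4 and 6 Hz — motivate the neighbouring-bin SNR measure common in this literature (Rossion, 2014). A minimal sketch on an assumed toy spectrum, not the exact published pipeline:

```python
import numpy as np

def snr_at(freqs, amp, f_target, n_neighbours=10, skip=1):
    """SNR = amplitude at the target bin divided by the mean amplitude of
    n_neighbours bins on each side, skipping `skip` bins adjacent to the
    target (parameter choices vary across studies)."""
    idx = int(np.argmin(np.abs(freqs - f_target)))
    lo = amp[idx - skip - n_neighbours: idx - skip]
    hi = amp[idx + skip + 1: idx + skip + 1 + n_neighbours]
    return amp[idx] / np.mean(np.concatenate([lo, hi]))

# toy amplitude spectrum: broadband noise plus an oddball response
# at F/5 = 2 Hz and its harmonics at 4 and 6 Hz
fs, dur = 512.0, 60.0
freqs = np.fft.rfftfreq(int(fs * dur), 1 / fs)
rng = np.random.default_rng(1)
amp = rng.random(freqs.size) * 0.1
for harmonic in (2.0, 4.0, 6.0):
    amp[np.argmin(np.abs(freqs - harmonic))] += 1.0

print(f"SNR at 2 Hz: {snr_at(freqs, amp, 2.0):.1f}")
```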