Article

Head models and dynamic causal modeling of subcortical activity using magnetoencephalographic/electroencephalographic data.

Université Pierre et Marie Curie-Paris 6, Centre de Recherche de l’institut du Cerveau et de la Moelle épinière, UMR-S975, 75651 Paris, France.
Reviews in the Neurosciences. 01/2012; 23(1):85-95. DOI: 10.1515/rns.2011.056
Source: PubMed

ABSTRACT Cognitive functions involve not only cortical but also subcortical structures. Subcortical sources, however, contribute very little to magnetoencephalographic (MEG) and electroencephalographic (EEG) signals because they are far from external sensors and their neural architectonic organization often makes them electromagnetically silent. Estimating the activity of deep sources from MEG and EEG (M/EEG) data is thus a challenging issue. Here, we review the influence of geometric parameters (location/orientation) on M/EEG signals produced by the main deep brain structures (amygdalo-hippocampal complex, thalamus and some basal ganglia). We then discuss several methods that have been used to address these issues and to localize or quantify the M/EEG contribution from deep neural currents. These methods rely on realistic forward models of subcortical regions or on introducing strong dynamical priors on inverse solutions that are based on biologically plausible neural models, such as those used in dynamic causal modeling (DCM) for M/EEG.
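The distance effect underlying the abstract's point about deep sources can be illustrated with a minimal sketch using the closed-form potential of a current dipole in an unbounded homogeneous conductor, V = (1/(4πσ)) p·r/|r|³. This is a deliberately simplified stand-in for the realistic head models the review discusses; the function name, positions, and moment below are illustrative assumptions, not part of the reviewed methods.

```python
import numpy as np

def dipole_potential(dipole_pos, dipole_moment, sensor_pos, sigma=0.33):
    """Electric potential of a current dipole in an unbounded homogeneous
    conductor of conductivity sigma (S/m): V = p . r / (4*pi*sigma*|r|^3).
    Positions in metres, moment in A*m.
    """
    r = np.asarray(sensor_pos) - np.asarray(dipole_pos)
    dist = np.linalg.norm(r)
    return float(np.dot(dipole_moment, r)) / (4.0 * np.pi * sigma * dist**3)

# Same 10 nA*m source, radially oriented, measured at a sensor 10 cm
# from the head centre: once 3 cm below the sensor (superficial, cortex-like),
# once at the centre (deep, thalamus-like).
sensor = np.array([0.0, 0.0, 0.10])
moment = np.array([0.0, 0.0, 10e-9])
v_shallow = dipole_potential([0.0, 0.0, 0.07], moment, sensor)
v_deep = dipole_potential([0.0, 0.0, 0.00], moment, sensor)
print(v_shallow / v_deep)  # -> ~11.1: the deep source contributes ~11x less
```

Because the potential of a dipole aligned with the sensor direction falls off as 1/r², tripling the source-sensor distance cuts the signal roughly ninefold, which is one reason subcortical activity is so hard to detect without the strong forward-model or dynamical priors reviewed here (the other being closed-field architectonics, which this point-dipole sketch does not capture).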

  • ABSTRACT: The amygdala is a key brain region for face perception. While its role in the perception of facial emotion and gaze has been extensively highlighted with fMRI, the unfolding in time of amygdala responses to emotional versus neutral faces with different gaze directions remains largely unknown. Here we addressed this question in healthy subjects using MEG combined with an original source imaging method based on individual amygdala volume segmentation and the localization of sources within the amygdala volume. We found an early peak of amygdala activity that was enhanced for fearful relative to neutral faces between 130 and 170 ms. The effect of emotion was again significant in a later time range (310-350 ms). Moreover, the amygdala response was greater for direct relative to averted gaze between 190 and 350 ms, and this effect was selective for fearful faces in the right amygdala. Altogether, our results show that the amygdala is involved in the processing and integration of emotion and gaze cues from faces in different time ranges, thus underlining its role in multiple stages of face perception.
    PLoS ONE 01/2013; 8(9):e74145.
  • ABSTRACT: Cross-modal activity in the visual cortex of blind subjects has been reported during the performance of a variety of non-visual tasks. A key unanswered question is through which pathways non-visual inputs are funneled to the visual cortex. Here we used tomographic analysis of single-trial magnetoencephalography (MEG) data recorded from one congenitally blind and two sighted subjects after stimulation of the left and right median nerves at three intensities: below sensory threshold, above sensory threshold, and above motor threshold, the last being sufficient to produce thumb twitching. We identified reproducible brain responses in the primary somatosensory (S1) and motor (M1) cortices at around 20 ms post-stimulus, which were very similar in sighted and blind subjects. Time-frequency analysis revealed strong 45-70 Hz activity at latencies of 20-50 ms in S1, M1, and posterior parietal cortex Brodmann areas (BA) 7 and 40, which, compared to lower frequencies, was substantially more pronounced in the blind subject than in the sighted subjects. Critically, at frequencies from the α-band up to 100 Hz we found clear, strong, and widespread responses in the visual cortex of the blind subject, which increased with the intensity of the somatosensory stimuli. Time-delayed mutual information (MI) revealed that in the blind subject stimulus information is funneled from early somatosensory to visual cortex through posterior parietal BA 7 and 40, projecting first to visual areas V5 and V3, and eventually V1. The flow of information through this pathway occurred in stages characterized by convergence of activations into specific cortical regions. In the sighted subjects, no linked activity was found leading from the somatosensory to the visual cortex through any of the studied brain regions. These results provide the first evidence from MEG that in blind subjects, tactile information is routed from primary somatosensory to occipital cortex via the posterior parietal cortex.
    Frontiers in Human Neuroscience 01/2013; 7:429.
  • ABSTRACT: Human faces may signal relevant information and are therefore analysed rapidly and effectively by the brain. However, the precise mechanisms and pathways involved in rapid face processing are unclear. One view posits a role for a subcortical connection between early visual sensory regions and the amygdala, while an alternative account emphasises cortical mediation. To adjudicate between these functional architectures, we recorded magnetoencephalographic (MEG) evoked fields in human subjects during the presentation of faces of varying emotional valence. Early brain activity was better explained by dynamic causal models containing a direct subcortical connection to the amygdala, irrespective of emotional modulation. At longer latencies, models without a subcortical connection had comparable evidence. Hence, our results support the hypothesis that a subcortical pathway to the amygdala plays a role in rapid sensory processing of faces, in particular during early stimulus processing. This finding contributes to an understanding of the amygdala as a behavioural relevance detector.
    NeuroImage 08/2014.
