Monte Carlo Simulation Studies of EEG and MEG Localization Accuracy

Massachusetts General Hospital, NMR Center, Building 149, 13th Street, Charlestown, MA 02129, USA.
Human Brain Mapping (Impact Factor: 5.97). 05/2002; 16(1):47-62. DOI: 10.1002/hbm.10024
Source: PubMed


Both electroencephalography (EEG) and magnetoencephalography (MEG) are currently used to localize brain activity. The accuracy of source localization depends on numerous factors, including the specific inverse approach and source model, fundamental differences in EEG and MEG data, and the accuracy of the volume conductor model of the head (i.e., the forward model). Using Monte Carlo simulations, this study removes the effect of forward model errors and theoretically compares the use of EEG alone, MEG alone, and combined EEG/MEG data sets for source localization. Here, we use a linear estimation inverse approach with a distributed source model and a realistic forward head model. We evaluated its accuracy using the crosstalk and point spread metrics. The crosstalk metric for a specified location on the cortex describes the amount of activity incorrectly localized onto that location from other locations. The point spread metric provides the complementary measure: for that same location, the point spread describes the mis-localization of activity from that specified location to other locations in the brain. We also propose and examine the utility of a "noise sensitivity normalized" inverse operator. Given our particular forward and inverse models, our results show that 1) surprisingly, EEG localization is more accurate than MEG localization for the same number of sensors averaged over many source locations and orientations; 2) as expected, combining EEG with MEG produces the best accuracy for the same total number of sensors; 3) the noise sensitivity normalized inverse operator improves the spatial resolution relative to the standard linear estimation operator; and 4) use of an a priori fMRI constraint universally reduces both crosstalk and point spread.
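The crosstalk and point spread metrics described above can both be read off the resolution matrix R = WG, where G is the forward (gain) matrix and W the linear inverse operator. A minimal numpy sketch, with hypothetical dimensions and random matrices standing in for a real head model (the normalization by the diagonal entry is one common convention, not necessarily the paper's exact definition):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: n_sensors channels, n_sources cortical locations.
n_sensors, n_sources = 64, 200

# G: forward (gain) matrix mapping source amplitudes to sensor measurements.
G = rng.standard_normal((n_sensors, n_sources))

# Minimum-norm (linear estimation) inverse operator with Tikhonov
# regularization: W = G' (G G' + lam^2 I)^-1.
lam = 0.1
W = G.T @ np.linalg.inv(G @ G.T + lam**2 * np.eye(n_sensors))

# Resolution matrix: R[i, j] is the estimate at location i produced by
# unit activity at location j.
R = W @ G

def crosstalk(R, i):
    """Activity incorrectly localized ONTO location i from all other
    locations: the off-diagonal entries of row i of R."""
    row = R[i].copy()
    row[i] = 0.0
    return np.linalg.norm(row) / abs(R[i, i])

def point_spread(R, i):
    """Mis-localization of activity FROM location i to other locations:
    the off-diagonal entries of column i of R."""
    col = R[:, i].copy()
    col[i] = 0.0
    return np.linalg.norm(col) / abs(R[i, i])

print(crosstalk(R, 0), point_spread(R, 0))
```

Row i and column i of R are exactly complementary, as the abstract notes: one measures leakage into a location, the other leakage out of it.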


    • "In addition, MEG and EEG are differentially sensitive to near versus distant sources in the brain. More specifically, the magnetic lead field, or magnetometer sensitivity, falls off more quickly with distance for MEG (Hämäläinen et al., 1993; Liu et al., 2002); and the drop-off is yet steeper with the use of gradiometers (Baillet et al., 2001). Moreover, for a spherical head model, the MEG lead field is zero at the centre (i.e. at or near the brainstem), whereas the EEG lead field is non-zero (Hämäläinen et al., 1993). "
    ABSTRACT: Current hypotheses about language processing advocate an integral relationship between encoding of temporal information and linguistic processing in the brain. All such explanations must accommodate the evident ability of the perceptual system to process both slow and fast time scales in speech. However most cortical neurons are limited in their capability to precisely synchronise to temporal modulations at rates faster than about 50 Hz. Hence, a central question in auditory neurophysiology concerns how the full range of perceptually relevant modulation rates might be encoded in the cerebral cortex. Here we show with concurrent noninvasive magnetoencephalography (MEG) and electroencephalography (EEG) measurements that the human auditory cortex transitions between a phase-locked (PL) mode of responding to modulation rates below about 50 Hz, and a non phase-locked (NPL) mode at higher rates. Precisely such dual response modes are predictable from the behaviours of single neurons in auditory cortices of non-human primates. Our data point to a common mechanistic explanation for the single neuron and MEG/EEG results and support the hypothesis that two distinct types of neuronal encoding mechanisms are employed by the auditory cortex to represent a wide range of temporal modulation rates. This dual encoding model allows slow and fast modulations in speech to be processed in parallel and is therefore consistent with theoretical frameworks in which slow temporal modulations (such as rhythm or syllabic structure) are akin to the contours or edges of visual objects, whereas faster modulations (such as periodicity pitch or phonemic structure) are more like visual texture.
    Full-text · Article · Jan 2016 · NeuroImage
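The snippet above notes that gradiometers fall off with distance more steeply than magnetometers. A toy numerical sketch of why: treating the magnetometer's leading-order sensitivity to a dipolar source as proportional to 1/r², a first-order gradiometer measures the field difference across a short coil baseline, which decays roughly one power of r faster. The distances and baseline below are illustrative values, not taken from the cited papers:

```python
import numpy as np

# Source-to-sensor distances in meters (illustrative range).
r = np.linspace(0.03, 0.12, 100)

# Leading-order magnetometer sensitivity to a dipolar source: ~ 1/r^2.
magnetometer = 1.0 / r**2

# A first-order gradiometer measures the field difference across a short
# baseline (here 2 cm), which falls off faster with distance.
baseline = 0.02
gradiometer = 1.0 / r**2 - 1.0 / (r + baseline) ** 2

# Normalize both to their value at the closest distance to compare decay.
mag_rel = magnetometer / magnetometer[0]
grad_rel = gradiometer / gradiometer[0]

# The gradiometer's relative signal decays faster than the magnetometer's.
print(mag_rel[-1], grad_rel[-1])
```

At the far end of this range the gradiometer retains a smaller fraction of its near-source sensitivity than the magnetometer, consistent with the steeper drop-off described in the quote.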
    • "In this approach, the MEG and EEG source locations are restricted to the cortical mantle derived from anatomical MRI to reduce the potential solution space (Dale and Sereno, 1993). Additional improvements are achieved by combining the complementary information provided by simultaneously measured MEG and EEG, which helps provide better accuracy and smaller point spread of the source estimates than either modality alone (Ding and Yuan, 2013; Henson et al., 2009; Liu et al., 2002; Sharon et al., 2007). "
    ABSTRACT: Spatial and non-spatial information of sound events is presumably processed in parallel auditory cortex (AC) "what" and "where" streams, which are modulated by inputs from the respective visual-cortex subsystems. How these parallel processes are integrated to perceptual objects that remain stable across time and the source agent's movements is unknown. We recorded magneto- and electroencephalography (MEG/EEG) data while subjects viewed animated video clips featuring two audiovisual objects, a black and a gray cat. Adaptor-probe events were either linked to the same object (the black cat meowed twice in a row in the same location) or included a visually conveyed identity change (the black and then the gray cat meowed with identical voices in the same location). In addition to effects in visual (including fusiform, middle temporal or MT areas) and frontoparietal association areas, the visually conveyed object-identity change was associated with a release from adaptation of early (50-150 ms) activity in posterior ACs, spreading to left anterior ACs at 250-450 ms in our combined MEG/EEG source estimates. Repetition of events belonging to the same object resulted in increased theta-band (4-8 Hz) synchronization within the "what" and "where" pathways (e.g., between anterior AC and fusiform areas). In contrast, the visually conveyed identity changes resulted in distributed synchronization at higher frequencies (alpha and beta bands, 8-32 Hz) across different auditory, visual, and association areas. The results suggest that sound events become initially linked to perceptual objects in posterior AC, followed by modulations of representations in anterior AC. Hierarchical what and where pathways seem to operate in parallel after repeating audiovisual associations, whereas the resetting of such associations engages a distributed network across auditory, visual, and multisensory areas.
    Preview · Article · Sep 2015 · NeuroImage
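Combining simultaneously measured MEG and EEG, as in the study above, is commonly done by whitening each modality by its own noise level (so that femtotesla and microvolt channels become comparable) and stacking the whitened gain matrices into a single inverse problem. A minimal sketch under those assumptions, with hypothetical channel counts and random matrices in place of real forward models:

```python
import numpy as np

rng = np.random.default_rng(1)
n_meg, n_eeg, n_sources = 102, 60, 200

# Hypothetical gain matrices and per-channel noise standard deviations.
G_meg = rng.standard_normal((n_meg, n_sources))
G_eeg = rng.standard_normal((n_eeg, n_sources))
sigma_meg = np.full(n_meg, 0.1)  # arbitrary units
sigma_eeg = np.full(n_eeg, 0.2)

# Whiten each modality by its noise level, then stack into one combined
# gain matrix so both sensor types contribute on a common scale.
G = np.vstack([G_meg / sigma_meg[:, None],
               G_eeg / sigma_eeg[:, None]])

# Minimum-norm inverse operator on the combined, whitened measurements.
lam = 0.1
W = G.T @ np.linalg.inv(G @ G.T + lam**2 * np.eye(n_meg + n_eeg))

# Apply to a whitened measurement vector to get one source estimate.
x = rng.standard_normal(n_meg + n_eeg)
s_hat = W @ x
```

With diagonal noise covariances the whitening is a per-channel division as above; a full noise covariance matrix would instead be handled with its Cholesky factor.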
    • "A recent successful approach to the problem of noise reduction of EEG signals is independent component analysis (ICA), which decomposes a multi-channel signal into a set of sources with maximally independent components (ICs). However, it is limited in the number of separable ICs, at most N ICs from N electrodes, which makes the decomposition imperfect because the number of EEG sources is much higher than the number of ICs [3], [4], [5]. In addition, ICA-based methods require subjective decision making [6] or arbitrary tuning of thresholds [7] to distinguish artifact-related ICs from artifact-free ICs. "
    ABSTRACT: Data contamination by ocular artifacts such as eye blinks and eye movements is a major barrier that must be overcome when attempting to analyze electroencephalogram (EEG) and event-related potential (ERP) data. To handle this problem, a number of artifact removal methods have been proposed. Specifically, we focus on a method using multi-channel Wiener filters based on a probabilistic generative model. This method assumes that the observed signal is the sum of multiple signals elicited by psychological or physical events, and separates the observed signal into each event signal using estimated model parameters. Based on this scheme, we have proposed a model parameter estimation method using prior information about each event signal. In this paper, we examine the potential of this model to deal with highly contaminated signals by collecting EEG data intentionally contaminated by eye blinks and relatively clean ERP data, and using them as prior information for each event signal. We conducted an experimental evaluation using a classical attention task. The results showed that the proposed method effectively enhances the target ERP component while reducing the contamination caused by eye blinks.
    Full-text · Conference Paper · Aug 2015
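The core of the multi-channel Wiener filtering idea in the abstract above can be sketched in a few lines. Assuming the observation is signal plus uncorrelated noise, x = s + n, the linear MMSE filter is W = Rs (Rs + Rn)⁻¹, where Rs and Rn are the signal and noise spatial covariances. This is only the generic Wiener step, not the authors' full generative-model parameter estimation; the covariances below are random symmetric positive-definite stand-ins for ones that would in practice be estimated from prior ERP and eye-blink data:

```python
import numpy as np

rng = np.random.default_rng(2)
n_channels, n_samples = 32, 1000

# Stand-in spatial covariances (symmetric positive definite): Rs for the
# desired ERP signal, Rn for the eye-blink artifact. In the paper's
# scheme these would come from prior (template) recordings.
A = rng.standard_normal((n_channels, n_channels))
Rs = A @ A.T / n_channels
B = rng.standard_normal((n_channels, n_channels))
Rn = B @ B.T / n_channels

# Multi-channel Wiener filter: linear MMSE estimate of s from x = s + n,
# assuming s and n are zero-mean and uncorrelated.
W = Rs @ np.linalg.inv(Rs + Rn)

# Apply the spatial filter to observed data (channels x samples).
x = rng.standard_normal((n_channels, n_samples))
s_hat = W @ x
```

Unlike ICA, this formulation has no N-components-from-N-electrodes ceiling and no component-selection step: the trade-off between signal preservation and artifact suppression is carried entirely by the two covariance estimates.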