Figure 1.7: Cortical homunculus by Wilder Graves Penfield [174]. It represents the mapping of the primary sensory (S1) and primary motor (M1) cortices. S1 lies on the posterior wall of the central sulcus (cf. postcentral gyrus in Figure 1.6(a)) and M1 on its anterior wall. These maps were established by direct electrical stimulation of patients during surgery. The primary auditory cortices (A1), left and right, are represented in the temporal lobes.
Source publication
The overall aim of this thesis is the development of novel electroencephalography (EEG) and magnetoencephalography (MEG) analysis methods to provide new insights into the functioning of the human brain. MEG and EEG are non-invasive techniques that measure, outside of the head, the electric potentials and the magnetic fields induced by neuronal activity...
Similar publications
Localization of linear frequency modulation (LFM) sources based on a combination of time-frequency analysis and spatial spectrum estimation techniques has received extensive research attention. However, this scheme is always confined to far-field sources and suffers from high computational cost. In this letter, by performing the fractional Fourier transform (FRFT)...
Citations
... However, the electrical potentials produced by neuronal activity are diffused by the human skull due to its low conductivity. This leads to a noisy signal that is only a rough measure of the actual brain activity [Gra09]. EEG is therefore not suited to fine-grained functional brain mapping, which explains why fMRI is more widely used for that purpose. ...
This thesis contributes to the development of a probabilistic logic programming language specific to the domain of cognitive neuroscience, coined NeuroLang, and presents some of its applications to the meta-analysis of the functional brain mapping literature. By relying on logic formalisms such as Datalog, and their probabilistic extensions, we show how NeuroLang makes it possible to combine uncertain and heterogeneous data to formulate rich meta-analytic hypotheses. We encode the Neurosynth database into a NeuroLang program and formulate probabilistic logic queries resulting in term-association brain maps and coactivation brain maps similar to those obtained with existing tools, highlighting existing brain networks. We prove the correctness of our model by using the joint probability distribution defined by the Bayesian network translation of probabilistic logic programs, showing that queries lead to the same estimations as Neurosynth. Then, we show that modeling term-to-study associations probabilistically based on term frequency-inverse document frequency (TF-IDF) measures results in better accuracy on simulated data, and better consistency on real data, for two-term conjunctive queries on smaller sample sizes. Finally, we use NeuroLang to formulate and test concrete functional brain mapping hypotheses, reproducing past results. By solving segregation logic queries combining the Neurosynth database, topic models, and the data-driven functional atlas DiFuMo, we find supporting evidence of a heterogeneous organisation of the frontoparietal control network (FPCN), and supporting evidence that the subregion of the fusiform gyrus called the visual word form area (VWFA) is recruited within attentional tasks, on top of language-related cognitive tasks.
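The abstract above derives term-to-study association probabilities from TF-IDF measures. As a minimal sketch of the TF-IDF side only (the toy corpus, the max-normalization into probabilities, and the query term "reading" are all made up for illustration; this is not NeuroLang's actual pipeline):

```python
# Minimal TF-IDF sketch (hypothetical corpus; not NeuroLang's probabilistic semantics).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy "study abstracts" standing in for Neurosynth documents.
docs = [
    "language reading visual word form area fusiform",
    "attention visual spatial task frontoparietal",
    "language semantics reading comprehension",
]
vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform(docs)          # shape: (n_studies, n_terms)

# One (assumed) way to turn TF-IDF weights into term-to-study probabilities:
# rescale each term's column to [0, 1] so it can act as a probabilistic label.
scores = tfidf.toarray()
probs = scores / (scores.max(axis=0, keepdims=True) + 1e-12)
term_idx = vectorizer.vocabulary_["reading"]
print(probs[:, term_idx])                       # P(term "reading" | study), per study
```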
... In this model, 2004 dipolar current sources were placed evenly on the cortical surface and 58 sensors were placed on the scalp according to the extended 10-20 system [107]. Finally, the lead field matrix was computed using the finite element method (FEM) for a given head geometry and exploiting the quasi-static approximation of Maxwell's equations [14,19,36,108]. ...
Several problems in neuroimaging and beyond require inference on the parameters of multi-task sparse hierarchical regression models. Examples include M/EEG inverse problems, neural encoding models for task-based fMRI analyses, and temperature monitoring in climate science or of CPUs and GPUs. In these domains, both the model parameters to be inferred and the measurement noise may exhibit complex spatio-temporal structure. Existing work either neglects the temporal structure or leads to computationally demanding inference schemes. Overcoming these limitations, we devise a novel flexible hierarchical Bayesian framework in which the spatio-temporal dynamics of model parameters and noise are modeled to have Kronecker product covariance structure. Inference in our framework is based on majorization-minimization optimization and has guaranteed convergence properties. Our highly efficient algorithms exploit the intrinsic Riemannian geometry of temporal autocovariance matrices. For stationary dynamics described by Toeplitz matrices, the theory of circulant embeddings is employed. We prove convex bounding properties and derive update rules for the resulting algorithms. On both synthetic and real neural data from M/EEG, we demonstrate that our methods lead to improved performance.
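The abstract models spatio-temporal covariance as a Kronecker product. A minimal NumPy sketch, with arbitrary dimensions and randomly generated factors, of why that structure pays off computationally (only the small factors need to be stored and factorized):

```python
import numpy as np

n_space, n_time = 4, 3                       # hypothetical sizes
rng = np.random.default_rng(0)

# Small spatial and temporal covariance factors (made positive definite).
A = rng.standard_normal((n_space, n_space)); Sigma_s = A @ A.T + n_space * np.eye(n_space)
B = rng.standard_normal((n_time, n_time));  Sigma_t = B @ B.T + n_time * np.eye(n_time)

# Full spatio-temporal covariance: Kronecker product of the two factors.
Sigma = np.kron(Sigma_s, Sigma_t)            # (n_space*n_time) x (n_space*n_time)

# Key computational benefit: properties factorize, e.g. the log-determinant.
sign, logdet = np.linalg.slogdet(Sigma)
logdet_factored = (n_time * np.linalg.slogdet(Sigma_s)[1]
                   + n_space * np.linalg.slogdet(Sigma_t)[1])
print(np.allclose(logdet, logdet_factored))  # True
```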
... The linear forward mapping from X to Y is given by the lead field matrix L ∈ ℝ^{M×N}, which is here assumed to be known. In practice, L can be computed using discretization methods such as the Finite Element Method (FEM) for a given head geometry and known electrical conductivities, using the quasi-static approximation of Maxwell's equations (Baillet et al., 2001; Gramfort, 2009; Hämäläinen et al., 1993; Huang et al., 2016). ...
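The excerpt describes the linear forward model relating sources X to measurements Y through the lead field L. A minimal synthetic sketch, reusing the dimensions quoted in an earlier citation (2004 sources, 58 sensors) but with a random stand-in for the FEM/BEM lead field and an arbitrary noise level:

```python
import numpy as np

rng = np.random.default_rng(42)
M, N, T = 58, 2004, 100        # sensors, sources, time samples (hypothetical T)

L = rng.standard_normal((M, N))      # stand-in for a FEM/BEM lead field
X = np.zeros((N, T))
X[rng.choice(N, size=3, replace=False), :] = rng.standard_normal((3, T))  # 3 active sources

noise = 0.1 * rng.standard_normal((M, T))
Y = L @ X + noise                    # forward model: Y = L X + E
print(Y.shape)                       # (58, 100)
```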
Methods for electro- or magnetoencephalography (EEG/MEG) based brain source imaging (BSI) using sparse Bayesian learning (SBL) have been demonstrated to achieve excellent performance in situations with low numbers of distinct active sources, such as event-related designs. This paper extends the theory and practice of SBL in three important ways. First, we reformulate three existing SBL algorithms under the majorization-minimization (MM) framework. This unifying perspective not only provides a useful theoretical framework for comparing different algorithms in terms of their convergence behavior, but also provides a principled recipe for constructing novel algorithms with specific properties by designing appropriate bounds of the Bayesian marginal likelihood function. Second, building on the MM principle, we propose a novel method called LowSNR-BSI that achieves favorable source reconstruction performance in low signal-to-noise-ratio (SNR) settings. Third, since precise knowledge of the noise level is a crucial requirement for accurate source reconstruction, we present a novel principled technique to accurately learn the noise variance from the data, either jointly within the source reconstruction procedure or using one of two proposed cross-validation strategies. Empirically, we show that the monotone convergence behavior predicted by MM theory is confirmed in numerical experiments. Using simulations, we further demonstrate the advantage of LowSNR-BSI over conventional SBL in low-SNR regimes, and the advantage of learned noise levels over estimates derived from baseline data. To demonstrate the usefulness of our novel approach, we show neurophysiologically plausible source reconstructions on averaged auditory evoked potential data.
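As a sketch of the kind of SBL iteration such methods build on, here is the classic EM update for per-source variances γ (this is generic textbook SBL, not the paper's LowSNR-BSI algorithm; the noise variance and iteration count are placeholders):

```python
import numpy as np

def sbl_em(Y, L, noise_var=0.1, n_iter=50):
    """Classic EM-style sparse Bayesian learning for Y ≈ L X (sketch)."""
    M, N = L.shape
    gamma = np.ones(N)                                   # per-source variances
    for _ in range(n_iter):
        Sigma_y = noise_var * np.eye(M) + (L * gamma) @ L.T
        K = np.linalg.solve(Sigma_y, L)                  # Sigma_y^{-1} L, shape (M, N)
        X = gamma[:, None] * (K.T @ Y)                   # posterior mean of the sources
        # EM update: empirical second moment plus posterior variance per source.
        post_var = gamma - gamma**2 * np.einsum('mn,mn->n', L, K)
        gamma = (X**2).mean(axis=1) + post_var
    return X, gamma
```

Run on the synthetic Y and L from the previous sketch, most entries of gamma shrink toward zero, leaving a sparse set of active sources.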
... However, the speed of convergence of the IRLS solver was not very competitive compared to advanced methods for solving the optimization problem with an ℓ1-norm prior, and the method suffered from potential numerical instabilities due to precision limitations in the computation of the matrix inverse [28]. The IRLS method was utilized in the FOCal Underdetermined System Solver (FOCUSS) algorithm to solve the ℓ0-norm prior problem [25,29] and, more generally, the ℓp-norm penalization problem with p ≤ 1 [30]. In the context of cosparse signal recovery, Giryes et al. [31,32] proposed analysis versions of IHT and HTP, called analysis IHT (AIHT) and analysis HTP (AHTP). Both the AIHT and AHTP methods have been accompanied by recovery guarantees analogous to the RIP-based guarantees that apply to their synthesis counterparts, IHT and HTP. ...
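A minimal IRLS sketch for the ℓp-penalized problem the excerpt mentions; the damping constant `eps` guards against exactly the numerical instability the excerpt points out (all parameter values are illustrative, and this is a generic FOCUSS-style loop, not the cited implementations):

```python
import numpy as np

def irls_lp(A, y, p=1.0, lam=1e-2, eps=1e-8, n_iter=100):
    """IRLS for min ||y - A x||^2 + lam * sum |x_i|^p, p <= 1 (FOCUSS-style sketch)."""
    x = np.linalg.lstsq(A, y, rcond=None)[0]             # initial estimate
    for _ in range(n_iter):
        w = (x**2 + eps) ** (p / 2 - 1)                  # reweighting; eps avoids 1/0
        x = np.linalg.solve(A.T @ A + lam * np.diag(w), A.T @ y)
    return x
```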
In the past decade, compressed sensing (CS) has provided an efficient framework for signal compression and recovery as intermediate steps in signal processing. The well-known greedy analysis algorithm called Greedy Analysis Pursuit (GAP) has the capability of recovering signals from a restricted number of measurements. In this article, we propose an extension of GAP that solves a weighted optimization problem satisfying an inequality constraint based on the Lorentzian cost function, to improve EEG signal reconstruction in the presence of heavy-tailed impulsive noise. Numerical results illustrate the effectiveness of the proposed algorithm, called enhanced weighted GAP (ewGAP), in reinforcing the efficiency of signal reconstruction, making it an appropriate candidate for compressed sensing of EEG signals. The suggested algorithm achieves promising reconstruction performance and robustness, outperforming other analysis-based approaches such as GAP, Analysis Subspace Pursuit (ASP), and Analysis Compressive Sampling Matching Pursuit (ACoSaMP).
... As a consequence, the minimum-norm estimate is biased toward superficial sources [15]. To cope with this problem, the weighted minimum norm (WMN) estimate was proposed, which weights the sources in proportion to the L2-norm of the forward field originating from a unit source. Other methods based on the L2-norm include Low Resolution Brain Electromagnetic Tomography (LORETA), which basically applies a Laplacian operator to the sources, and its variants, namely Standardized LORETA (sLORETA) and exact LORETA (eLORETA). ...
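The WMN estimate described in the excerpt has a simple closed form under an ℓ2 prior. A sketch, with an illustrative regularization value:

```python
import numpy as np

def weighted_minimum_norm(L, y, lam=1e-2):
    """Depth-weighted minimum-norm estimate (sketch).

    Solves min_x ||y - L x||^2 + lam * ||W x||^2 with W = diag(||l_i||_2),
    which counteracts the bias toward superficial sources.
    """
    M, N = L.shape
    w2_inv = 1.0 / (np.linalg.norm(L, axis=0) ** 2)      # diagonal of W^{-2}
    Lw = L * w2_inv                                      # L W^{-2}
    G = Lw @ L.T + lam * np.eye(M)                       # L W^{-2} L^T + lam I
    return w2_inv * (L.T @ np.linalg.solve(G, y))        # W^{-2} L^T G^{-1} y
```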
... ϵ is the electrical permittivity of the medium and μ is the magnetic permeability. It is commonly accepted that the temporal frequencies of the brain's electromagnetic field observable outside the head rarely exceed 100 Hz [15]. Therefore, the quasi-static approximation can be used, omitting the time derivatives in Maxwell's equations. Applying the divergence operator to Equation (2) yields: ...
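The excerpt is cut off before the equation. Assuming Equation (2) is the Ampère-Maxwell law, the standard derivation (as in Hämäläinen et al., 1993) would read as follows; this is a reconstruction of the likely content, not the citing paper's exact text:

```latex
% Ampère–Maxwell law (assumed to be the excerpt's Equation (2)):
\nabla \times \mathbf{B} = \mu \left( \mathbf{J} + \epsilon \frac{\partial \mathbf{E}}{\partial t} \right)

% Taking the divergence (the divergence of a curl vanishes) and dropping the
% time-derivative term under the quasi-static approximation:
\nabla \cdot \mathbf{J} = 0

% Splitting J into primary and ohmic volume currents,
% \mathbf{J} = \mathbf{J}^p + \sigma \mathbf{E} = \mathbf{J}^p - \sigma \nabla V,
% gives the Poisson equation solved in the EEG forward problem:
\nabla \cdot (\sigma \nabla V) = \nabla \cdot \mathbf{J}^p
```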
Brain source imaging based on EEG aims to reconstruct the neural activities producing the scalp potentials. This involves solving both a forward and an inverse problem. The aim of the inverse problem is to estimate the activity of the brain sources based on the measured data and the leadfield matrix computed in the forward step. Spatial filtering, also known as beamforming, is an inverse method that reconstructs the time course of the source at a particular location by weighting and linearly combining the sensor data. In this paper, we incorporated a temporal assumption on the time course of the source, namely sparsity, into the Linearly Constrained Minimum Variance (LCMV) beamformer. This assumption is reasonable since not all brain sources are active all the time (epileptic spikes, for example), and some experimental protocols, such as electrical stimulation of a peripheral nerve, are sparse in time. The sparse beamformer is developed by incorporating L1-norm regularization of the beamformer output into the relevant cost function when obtaining the filter weights. We call this new beamformer SParse LCMV (SP-LCMV). We compared the performance of SP-LCMV with that of LCMV for both superficial and deep sources with different amplitudes using synthetic EEG signals. We also compared them in localizing and reconstructing the sources underlying electric median nerve stimulation. Results show that the proposed sparse beamformer can enhance the reconstruction of sparse sources, especially sources with high-amplitude spikes.
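For reference, the unit-gain LCMV weights that SP-LCMV starts from (a generic sketch with diagonal loading; the paper's ℓ1 penalty on the beamformer output is not reproduced here):

```python
import numpy as np

def lcmv_weights(C, l, reg=1e-6):
    """Unit-gain LCMV beamformer weights for leadfield column l (sketch).

    w = C^{-1} l / (l^T C^{-1} l); `reg` diagonally loads the data covariance C.
    """
    M = C.shape[0]
    Cr = C + reg * np.trace(C) / M * np.eye(M)
    Cinv_l = np.linalg.solve(Cr, l)
    return Cinv_l / (l @ Cinv_l)

# Source time course at that location: s(t) = w^T y(t), i.e. lcmv_weights(C, l) @ Y.
```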
... To investigate the performance of the proposed method on synthetic data, an EEG signal was simulated using the publicly available EMBAL toolbox [5]. For one of the simulated electrodes, the ground truth signal is illustrated in Fig. 1 (blue). ...
... The mixture is characterized by the lead field matrix G ∈ ℝ^{N×D}, which describes the attenuation inflicted on the dipole signals during their diffusion through the head volume conductor. Given a head model and a source space, the lead field matrix can be computed numerically using a Boundary Element Method (BEM) (Gramfort, 2009). ...
Over the past decades, a multitude of different brain source imaging algorithms have been developed to identify the neural generators underlying surface electroencephalography measurements. While most of these techniques focus on determining source positions, only a small number of recently developed algorithms provide an indication of the spatial extent of distributed sources. In a recent comparison of brain source imaging approaches, the VB-SCCD algorithm was shown to be one of the most promising of these methods. However, this technique suffers from several problems: it leads to amplitude-biased source estimates, it has difficulties separating close sources, and it has a high computational complexity due to its implementation using second-order cone programming. To overcome these problems, we propose to include an additional regularization term that imposes sparsity in the original source domain, and to solve the resulting optimization problem using the alternating direction method of multipliers. Furthermore, we show that the algorithm yields more robust solutions by taking into account the temporal structure of the data. We also propose a new method to automatically threshold the estimated source distribution, which permits delineation of the active brain regions. The new algorithm, called Source Imaging based on Structured Sparsity (SISSY), is analyzed by means of realistic computer simulations and is validated on the clinical data of four patients.
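A compact ADMM sketch of the structured-sparsity objective SISSY uses, min ½‖y − Lx‖² + λ(‖Vx‖₁ + α‖x‖₁); this follows the idea, not the paper's exact implementation, V is a gradient/variation operator on the source mesh, and all parameter values are illustrative:

```python
import numpy as np

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sissy_like_admm(L, V, y, lam=0.1, alpha=0.5, rho=1.0, n_iter=200):
    """ADMM sketch for min 0.5||y - L x||^2 + lam*(||V x||_1 + alpha*||x||_1)."""
    N = L.shape[1]
    M = np.vstack([V, np.eye(N)])                # stacked analysis operator
    t = lam / rho * np.concatenate([np.ones(V.shape[0]), alpha * np.ones(N)])
    Q = np.linalg.inv(L.T @ L + rho * M.T @ M)   # acceptable at sketch scale
    x = np.zeros(N); z = np.zeros(M.shape[0]); u = np.zeros_like(z)
    for _ in range(n_iter):
        x = Q @ (L.T @ y + rho * M.T @ (z - u))  # quadratic x-update
        z = soft(M @ x + u, t)                   # per-term soft-thresholding
        u += M @ x - z                           # dual ascent
    return x
```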
... Given a head model and a source space, the lead field matrix can be computed numerically using a Boundary Element Method (BEM) (Gramfort, 2009). An extended source, also referred to as a patch, corresponds to a contiguous area of cortex with highly correlated activities and can be modeled by a number of adjacent grid dipoles with synchronized signals. ...
... When an AP arrives at the end of an axon terminal, it leads to the release of neurotransmitters. These neurotransmitters reach other neurons and affect their membrane permeability so that specific ions (sodium (Na) and potassium (K)) penetrate the neuron (Gramfort, 2009). Figure 1.8 shows an example of an action potential. ...
... Another condition for measurable brain activity is that postsynaptic potentials (PSPs) must have the same direction in order to add up. Contrary to stellate neurons, whose dendrites are oriented in all directions, pyramidal cells, which constitute about 70%-80% of the neocortex (Gramfort, 2009), are oriented orthogonally to the cortical surface and are thus well suited to generating PSPs in aligned directions. ...
... For human tissues, the electric permittivity ε = ε_r ε_0 varies a lot depending on tissue and frequency, whereas the magnetic permeability µ is the same as for vacuum (µ_0). At a frequency of 100 Hz, ε_r is around 4 × 10^6 for gray matter, 5 × 10^5 for fat and 6 × 10^3 for compact bone (Gabriel et al., 1996; Gramfort, 2009). The solution of the forward problem does not depend on ε, because it multiplies ∂E/∂t in Equation 2.1d, a term which is neglected under the quasi-static approximation. ...
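The excerpt's permittivity figures allow a direct sanity check of the quasi-static approximation: the displacement-current term is negligible when ωε/σ ≪ 1. A quick computation (the gray-matter conductivity of 0.33 S/m is an assumed, commonly quoted value, not taken from the excerpt):

```python
import numpy as np

eps0 = 8.854e-12          # vacuum permittivity, F/m
f = 100.0                 # Hz, upper bound on observable brain rhythms
eps_r = 4e6               # relative permittivity of gray matter at 100 Hz (excerpt)
sigma = 0.33              # S/m, assumed gray-matter conductivity

ratio = 2 * np.pi * f * eps_r * eps0 / sigma
print(f"omega*eps/sigma ≈ {ratio:.3f}")   # ≈ 0.07 << 1: displacement current negligible
```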
Understanding how brain regions interact to perform a given task is very challenging. Electroencephalography (EEG) and magnetoencephalography (MEG) are two non-invasive functional imaging modalities used to record brain activity with high temporal resolution. As estimating brain activity from these measurements is an ill-posed problem, we must set a prior on the sources to obtain a unique solution. It has been shown in previous studies that the structural homogeneity of brain regions can reflect their functional homogeneity. One of the main goals of this work is to use this structural information to define priors that constrain the MEG/EEG source reconstruction problem more anatomically. This structural information is obtained using diffusion magnetic resonance imaging (dMRI), which is, as of today, the only non-invasive structural imaging modality providing insight into the structural organization of white matter, justifying its use to constrain the EEG/MEG inverse problem. In our work, dMRI information is used to reconstruct brain activation in two ways: 1) in a spatial method which uses brain parcels to constrain the source activity, where the parcels are obtained by our whole-brain parcellation algorithm, which computes the cortical regions with the most structural homogeneity with respect to a similarity measure; and 2) in a spatio-temporal method that makes use of the anatomical connections computed from dMRI to constrain the sources' dynamics. These different methods are validated using synthetic and real data.
... Although the focalization is greatly improved, these methods fail to estimate the extent of the sources since the reconstructed source is overfocused. To address this issue, efforts have been devoted to exploring sparsity in transform domains of the current density, such as the spatial Laplacian domain (Haufe et al., 2008; Vega-Hernández et al., 2008; Chang et al., 2010), the wavelet-basis domain (Chang et al., 2010; Liao et al., 2012; Zhu et al., 2014), the Gaussian-basis domain (Haufe et al., 2011), or the variation domain (Adde et al., 2005; Ding, 2009; Gramfort, 2009; Luessi et al., 2011; Becker et al., 2014; Sohrabpour et al., 2016). Furthermore, in order to obtain a locally smooth and globally sparse result, some approaches impose sparsity on both the transform domain and the original source domain. ...
... TV-based methods assume the intensity of the source to be uniformly distributed in space and hence fail to reflect the intensity variation of the sources. This effect becomes more obvious as the regularization parameter increases, resulting in an even flatter intensity distribution (Gramfort, 2009). By contrast, the proposed method, s-SMOOTH, assumes the intensity of adjacent dipoles to be piecewise polynomial, resulting in a very smooth brain image that precisely recovers the magnitude variation within a source (Figure 7). ...
EEG source imaging enables us to reconstruct the current density in the brain from electrical measurements with excellent temporal resolution (~ms). The corresponding EEG inverse problem is ill-posed and has infinitely many solutions, because the number of EEG sensors is usually much smaller than the number of potential dipole locations and the recorded signals are contaminated by noise. To obtain a unique solution, regularization can be incorporated to impose additional constraints on the solution. An appropriate choice of regularization is critically important for the reconstruction accuracy of a brain image. In this paper, we propose a novel Sparsity and SMOOthness enhanced brain TomograpHy (s-SMOOTH) method to improve reconstruction accuracy by integrating two recently proposed regularization techniques: Total Generalized Variation (TGV) regularization and ℓ1−2 regularization. TGV is able to preserve source edges and recover the spatial distribution of the source intensity with high accuracy. Compared to the related total variation (TV) regularization, TGV enhances the smoothness of the image and reduces staircasing artifacts. The traditional TGV, defined on 2D images, has been widely used in the image processing field. In order to handle 3D EEG source images, we propose a voxel-based Total Generalized Variation (vTGV) regularization that extends the definition of second-order TGV from 2D planar images to 3D irregular surfaces such as the cortical surface. In addition, ℓ1−2 regularization is utilized to promote sparsity of the current density itself. We demonstrate that ℓ1−2 regularization is able to enhance sparsity and accelerate computations compared to ℓ1 regularization. The proposed model is solved by an efficient and robust algorithm based on the difference of convex functions algorithm (DCA) and the alternating direction method of multipliers (ADMM). Numerical experiments using synthetic data demonstrate the advantages of the proposed method over other state-of-the-art methods in terms of total reconstruction accuracy, localization accuracy and focalization degree. The application to source localization of event-related potential data further demonstrates its performance in real-world scenarios.
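The ℓ1−2 penalty above is typically minimized with DCA by linearizing the concave −‖x‖₂ term at each outer iteration, leaving a convex ℓ1 subproblem. A bare-bones sketch with an ISTA inner solver (step sizes and iteration counts are illustrative; this is not the paper's exact algorithm):

```python
import numpy as np

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def l1_minus_l2_dca(A, y, lam=0.1, n_outer=10, n_inner=200):
    """DCA sketch for min 0.5||y - A x||^2 + lam*(||x||_1 - ||x||_2)."""
    N = A.shape[1]
    x = np.zeros(N)
    eta = 1.0 / np.linalg.norm(A, 2) ** 2          # ISTA step size (1/L for A^T A)
    for _ in range(n_outer):
        nx = np.linalg.norm(x)
        g = x / nx if nx > 0 else np.zeros(N)      # subgradient of ||x||_2 at x^k
        for _ in range(n_inner):                   # ISTA on the convex surrogate
            grad = A.T @ (A @ x - y) - lam * g
            x = soft(x - eta * grad, eta * lam)
    return x
```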