Preprint

Brain-Network Clustering via Kernel-ARMA Modeling and the Grassmannian

Abstract

Recent advances in neuroscience and in the technology of functional magnetic resonance imaging (fMRI) and electroencephalography (EEG) have propelled a growing interest in brain-network clustering via time-series analysis. Nevertheless, most brain-network clustering methods revolve around state clustering and/or node clustering (a.k.a. community detection or topology inference) within states. This work first addresses the need to capture non-linear nodal dependencies by introducing a novel feature-extraction mechanism based on kernel autoregressive-moving-average (ARMA) modeling. The extracted features are mapped to the Grassmann manifold (Grassmannian), which consists of all linear subspaces of a fixed rank. By virtue of the Riemannian geometry of the Grassmannian, a unifying clustering framework is offered to tackle all possible clustering problems in a network: clustering multiple states, detecting communities within states, and even identifying/tracking subnetwork state sequences. The effectiveness of the proposed approach is underlined by extensive numerical tests on synthetic and real fMRI/EEG data, which demonstrate that the advocated learning method compares favorably with several state-of-the-art clustering schemes.

References
Article
Many time-evolving systems in nature, society and technology leave traces of the interactions within them. These interactions form temporal networks that reflect the states of the systems. In this work, we pursue a coarse-grained description of these systems by proposing a method to assign discrete states to the systems and inferring the sequence of such states from the data. Such states could, for example, correspond to a mental state (as inferred from neuroimaging data) or the operational state of an organization (as inferred by interpersonal communication). Our method combines a graph distance measure and hierarchical clustering. Using several empirical data sets of social temporal networks, we show that our method is capable of inferring the system's states such as distinct activities in a school and a weekday state as opposed to a weekend state. We expect the methods to be equally useful in other settings such as temporally varying protein interactions, ecological interspecific interactions, functional connectivity in the brain and adaptive social networks.
Article
The field of neuroscience is facing an unprecedented expansion in the volume and diversity of available data. Traditionally, network models have provided key insights into the structure and function of the brain. With the advent of big data in neuroscience, both more sophisticated models capable of characterizing the increasing complexity of the data and novel methods of quantitative analysis are needed. Recently, multilayer networks, a mathematical extension of traditional networks, have gained increasing popularity in neuroscience due to their ability to capture the full information of multi-modal, multi-scale, spatiotemporal data sets. Here, we review multilayer networks and their applications in neuroscience, showing how incorporating the multilayer framework into network neuroscience analysis has uncovered previously hidden features of brain networks. We specifically highlight the use of multilayer networks to model disease, structure-function relationships, network evolution, and link multi-scale data. Finally, we close with a discussion of promising new directions of multilayer network neuroscience research and propose a modified definition of multilayer networks designed to unite and clarify the use of the multilayer formalism in describing real-world systems.
Article
Recent work has revealed frequency-dependent global patterns of information flow by a network analysis of magnetoencephalography data of the human brain. However, it is unknown which properties on a small subgraph scale of those functional brain networks are dominant at different frequency bands. Motifs are the building blocks of networks on this level and have previously been identified as important features for healthy and abnormal brain function. In this study, we present a network construction that enables us to search and analyze motifs in different frequency bands. We give evidence that the bi-directional two-hop path is the most important motif for the information flow in functional brain networks. A clustering based on this motif exposes a spatially coherent yet frequency-dependent sub-division between the posterior, occipital and frontal brain regions.
Article
Several research studies have shown that complex networks modeling real-world phenomena are characterized by striking properties: (i) they are organized according to community structure, and (ii) their structure evolves with time. Many researchers have worked on methods that can efficiently unveil substructures in complex networks, giving birth to the field of community discovery. A novel and fascinating problem started capturing researcher interest recently: the identification of evolving communities. Dynamic networks can be used to model the evolution of a system: nodes and edges are mutable, and their presence, or absence, deeply impacts the community structure that composes them. This survey aims to present the distinctive features and challenges of dynamic community discovery and propose a classification of published approaches. As a “user manual,” this work organizes state-of-the-art methodologies into a taxonomy, based on their rationale, and their specific instantiation. Given a definition of network dynamics, desired community characteristics, and analytical needs, this survey will support researchers to identify the set of approaches that best fit their needs. The proposed classification could also help researchers choose in which direction to orient their future research.
Article
Cognitive flexibility describes the human ability to switch between modes of mental function to achieve goals. Mental switching is accompanied by transient changes in brain activity, which must occur atop an anatomical architecture that bridges disparate cortical and subcortical regions by underlying white matter tracts. However, an integrated perspective regarding how white matter networks might constrain brain dynamics during cognitive processes requiring flexibility has remained elusive. To address this challenge, we applied emerging tools from graph signal processing to decompose BOLD signals based on diffusion imaging tractography in 28 individuals performing a perceptual task that probed cognitive flexibility. We found that the alignment between functional signals and the architecture of the underlying white matter network was associated with greater cognitive flexibility across subjects. Signals with behaviorally-relevant alignment were concentrated in the basal ganglia and anterior cingulate cortex, consistent with cortico-striatal mechanisms of cognitive flexibility. Importantly, these findings are not accessible to unimodal analyses of functional or anatomical neuroimaging alone. Instead, by taking a generalizable and concise reduction of multimodal neuroimaging data, we uncover an integrated structure-function driver of human behavior.
Article
Most research on mind-wandering has characterized it as a mental state with contents that are task unrelated or stimulus independent. However, the dynamics of mind-wandering - how mental states change over time - have remained largely neglected. Here, we introduce a dynamic framework for understanding mind-wandering and its relationship to the recruitment of large-scale brain networks. We propose that mind-wandering is best understood as a member of a family of spontaneous-thought phenomena that also includes creative thought and dreaming. This dynamic framework can shed new light on mental disorders that are marked by alterations in spontaneous thought, including depression, anxiety and attention deficit hyperactivity disorder.
Article
Forensic electroencephalogram (EEG)-based lie detection has recently begun using the concealed information test (CIT) as a potentially more robust alternative to the classical comparative questions test. The main problem with using the CIT is that it requires an objective and fast decision algorithm under the constraint of limited available information. In this study, we developed a simple and feasible hierarchical knowledge base construction and test method for efficient concealed information detection based on objective EEG measures. We describe how a hierarchical feature space was formed and which level of the feature space was sufficient to accurately predict concealed information from the raw EEG signal in a short time. A total of 11 subjects went through an autobiographical paradigm test. A high accuracy of 95.23% in recognizing concealed information with a single EEG electrode within about 20 seconds demonstrates the effectiveness of the method.
Article
We present The Virtual Brain (TVB), a neuroinformatics platform for full brain network simulations using biologically realistic connectivity. This simulation environment enables the model-based inference of neurophysiological mechanisms across different brain scales that underlie the generation of macroscopic neuroimaging signals including functional MRI (fMRI), EEG and MEG. Researchers from different backgrounds can benefit from an integrative software platform including a supporting framework for data management (generation, organization, storage, integration and sharing) and a simulation core written in Python. TVB allows the reproduction and evaluation of personalized configurations of the brain by using individual subject data. This personalization facilitates an exploration of the consequences of pathological changes in the system, permitting investigation of potential ways to counteract such unfavorable processes. The architecture of TVB supports interaction with MATLAB packages, for example, the well-known Brain Connectivity Toolbox. TVB can be used in a client-server configuration, such that it can be remotely accessed through the Internet thanks to its web-based HTML5, JS, and WebGL graphical user interface. TVB is also accessible as a standalone cross-platform Python library and application, and users can interact with the scientific core through the scripting interface IDLE, enabling easy modeling, development and debugging of the scientific kernel. This second interface makes TVB extensible by combining it with other libraries and modules developed by the Python scientific community. In this article, we describe the theoretical background and foundations that led to the development of TVB, the architecture and features of its major software components as well as potential neuroscience applications.
Article
We give simple formulas for the canonical metric, gradient, Lie derivative, Riemannian connection, parallel translation, geodesics and distance on the Grassmann manifold of p-planes in R^n. In these formulas, p-planes are represented as the column space of n×p matrices. The Newton method on abstract Riemannian manifolds proposed by Smith is made explicit on the Grassmann manifold. Two applications – computing an invariant subspace of a matrix and the mean of subspaces – are worked out.
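As a concrete companion to these formulas, the following minimal NumPy sketch (illustrative only; the function name and test matrices are ours, not the article's) computes the geodesic distance between two p-planes from the principal angles obtained via an SVD of the product of their orthonormal bases.

```python
import numpy as np

def grassmann_distance(X, Y):
    """Geodesic distance between the column spaces of X and Y on Gr(n, p).

    X, Y: (n, p) matrices whose columns span the two p-planes. The distance is
    the 2-norm of the vector of principal angles, obtained from the singular
    values of Qx^T Qy.
    """
    Qx, _ = np.linalg.qr(X)                   # orthonormal basis of span(X)
    Qy, _ = np.linalg.qr(Y)                   # orthonormal basis of span(Y)
    s = np.linalg.svd(Qx.T @ Qy, compute_uv=False)
    theta = np.arccos(np.clip(s, -1.0, 1.0))  # principal angles (guard round-off)
    return np.linalg.norm(theta)

# Example: two random 3-planes in R^10
rng = np.random.default_rng(0)
A, B = rng.standard_normal((10, 3)), rng.standard_normal((10, 3))
print(grassmann_distance(A, B))
```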
Article
Spontaneous fluctuations are a hallmark of recordings of neural signals, emergent over time scales spanning milliseconds and tens of minutes. However, investigations of intrinsic brain organization based on resting-state functional magnetic resonance imaging have largely not taken into account the presence and potential of temporal variability, as most current approaches to examine functional connectivity (FC) implicitly assume that relationships are constant throughout the length of the recording. In this work, we describe an approach to assess whole-brain FC dynamics based on spatial independent component analysis, sliding time window correlation, and k-means clustering of windowed correlation matrices. The method is applied to resting-state data from a large sample (n = 405) of young adults. Our analysis of FC variability highlights particularly flexible connections between regions in lateral parietal and cingulate cortex, and argues against a labeling scheme where such regions are treated as separate and antagonistic entities. Additionally, clustering analysis reveals unanticipated FC states that in part diverge strongly from stationary connectivity patterns and challenge current descriptions of interactions between large-scale networks. Temporal trends in the occurrence of different FC states motivate theories regarding their functional roles and relationships with vigilance/arousal. Overall, we suggest that the study of time-varying aspects of FC can unveil flexibility in the functional coordination between different neural systems, and that the exploitation of these dynamics in further investigations may improve our understanding of behavioral shifts and adaptive processes.
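The windowing-and-clustering pipeline described above can be sketched compactly. The snippet below is a simplified illustration assuming NumPy and scikit-learn are available; the window length, step and number of states are arbitrary placeholders rather than the study's settings, and the spatial ICA stage is omitted.

```python
import numpy as np
from sklearn.cluster import KMeans   # assumed available

def windowed_fc_states(ts, win=30, step=1, n_states=5, seed=0):
    """Cluster sliding-window correlation matrices into FC 'states'.

    ts: (T, N) array of N regional (or ICA component) time series. Each window's
    correlation matrix is vectorized (upper triangle) and the windows are
    clustered with k-means. Returns the per-window state labels and the model.
    """
    T, N = ts.shape
    iu = np.triu_indices(N, k=1)
    feats = np.array([np.corrcoef(ts[s:s + win].T)[iu]
                      for s in range(0, T - win + 1, step)])
    km = KMeans(n_clusters=n_states, n_init=10, random_state=seed).fit(feats)
    return km.labels_, km

# Toy usage with synthetic data (500 time points, 20 nodes)
labels, km = windowed_fc_states(np.random.randn(500, 20))
print(labels[:20])
```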
Article
There are many situations in which indicators of changes or anomalies in communication networks can be helpful, e.g. in the identification of faults. A dynamic communication network is characterised as a series of graphs with vertices representing IP addresses and edges representing information exchange between these entities weighted by packets sent. Ten graph distance metrics are used to create time series of network changes by sequentially comparing graphs from adjacent periods. These time series are individually modelled as univariate autoregressive moving average (ARMA) processes. Each time series is assessed on the ability of the best ARMA model of it to identify anomalies through residual thresholding.
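A minimal version of this ARMA-plus-thresholding idea is sketched below, assuming the statsmodels package is available; the model order and threshold are illustrative choices, not those of the study.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA   # assumed available

def arma_anomaly_flags(dist_series, order=(2, 0, 1), z_thresh=3.0):
    """Flag anomalies in a graph-distance time series via ARMA residual thresholding.

    dist_series: 1-D array, e.g. the distance between consecutive network
    snapshots. A point is flagged when its residual exceeds z_thresh standard
    deviations of the residual series.
    """
    fit = ARIMA(dist_series, order=order).fit()
    resid = np.asarray(fit.resid)
    z = (resid - resid.mean()) / resid.std()
    return np.abs(z) > z_thresh

# Toy usage: a noisy distance series with one injected jump
rng = np.random.default_rng(0)
d = 1.0 + 0.1 * rng.standard_normal(200)
d[120] += 2.0                                   # simulated anomaly
print(np.where(arma_anomaly_flags(d))[0])       # should include index 120
```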
Article
Receiver operating characteristics (ROC) graphs are useful for organizing classifiers and visualizing their performance. ROC graphs are commonly used in medical decision making, and in recent years have been used increasingly in machine learning and data mining research. Although ROC graphs are apparently simple, there are some common misconceptions and pitfalls when using them in practice. The purpose of this article is to serve as an introduction to ROC graphs and as a guide for using them in research.
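For readers who want the mechanics spelled out, here is a small self-contained sketch (pure NumPy, toy data) that builds an ROC curve by sweeping a threshold over classifier scores and integrates it to obtain the AUC; it assumes untied scores for simplicity.

```python
import numpy as np

def roc_points(scores, labels):
    """ROC curve by threshold sweeping: returns FPR, TPR arrays and the AUC.

    scores: classifier scores (higher means 'more positive'); labels: 1/0 truth.
    """
    order = np.argsort(-scores)
    labels = np.asarray(labels)[order]
    tpr = np.concatenate(([0.0], np.cumsum(labels) / labels.sum()))
    fpr = np.concatenate(([0.0], np.cumsum(1 - labels) / (1 - labels).sum()))
    auc = np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2)   # trapezoidal area
    return fpr, tpr, auc

scores = np.array([0.9, 0.8, 0.7, 0.55, 0.5, 0.4, 0.3, 0.1])
labels = np.array([1, 1, 0, 1, 0, 0, 1, 0])
_, _, auc = roc_points(scores, labels)
print(auc)   # 0.75 for this toy example
```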
Article
Optical flow cannot be computed locally, since only one independent measurement is available from the image sequence at a point, while the flow velocity has two components. A second constraint is needed. A method for finding the optical flow pattern is presented which assumes that the apparent velocity of the brightness pattern varies smoothly almost everywhere in the image. An iterative implementation is shown which successfully computes the optical flow for a number of synthetic image sequences. The algorithm is robust in that it can handle image sequences that are quantized rather coarsely in space and time. It is also insensitive to quantization of brightness levels and additive noise. Examples are included where the assumption of smoothness is violated at singular points or along lines in the image.
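A bare-bones version of the iterative scheme can be written in a few lines; the sketch below is a simplified Horn-Schunck implementation assuming SciPy is available, with derivative and averaging kernels chosen for brevity rather than fidelity to the original formulation.

```python
import numpy as np
from scipy.ndimage import convolve   # assumed available

def horn_schunck(I1, I2, alpha=1.0, n_iter=100):
    """Minimal Horn-Schunck optical flow between two grayscale frames.

    Returns per-pixel flow components (u, v); alpha weighs the smoothness term.
    """
    I1, I2 = I1.astype(float), I2.astype(float)
    Ix = convolve(I1, np.array([[-1.0, 0.0, 1.0]]) / 2.0)     # spatial derivatives
    Iy = convolve(I1, np.array([[-1.0], [0.0], [1.0]]) / 2.0)
    It = I2 - I1                                              # temporal derivative
    avg = np.array([[1, 2, 1], [2, 0, 2], [1, 2, 1]]) / 12.0  # neighbourhood average
    u, v = np.zeros_like(I1), np.zeros_like(I1)
    for _ in range(n_iter):
        u_bar, v_bar = convolve(u, avg), convolve(v, avg)
        common = (Ix * u_bar + Iy * v_bar + It) / (alpha**2 + Ix**2 + Iy**2)
        u, v = u_bar - Ix * common, v_bar - Iy * common
    return u, v

# Toy usage: a bright square shifted by one pixel between frames
f1 = np.zeros((32, 32)); f1[10:20, 10:20] = 1.0
f2 = np.roll(f1, 1, axis=1)
u, v = horn_schunck(f1, f2)
print(u[12:18, 12:18].mean())   # predominantly horizontal flow
```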
Article
Epileptic seizures have been considered sudden and unpredictable events for centuries. A seizure seems to occur when a massive group of neurons in the cerebral cortex begins to discharge in a highly organized rhythmic pattern, and it then develops according to some poorly described dynamics. As shown by the results reported by different research groups, seizures do not appear to be completely random and unpredictable events. Thus, it is reasonable to wonder when, where and why the epileptogenic processes start up in the brain and how they result in a seizure. In order to detect these phenomena from the very beginning (hopefully minutes before the seizure itself), we introduced a technique, based on entropy topography, that studies the synchronization of the electric activity of neuronal sources in the brain. We tested it on 3 EEG data sets from patients affected by partial epilepsy and 25 EEG recordings from patients affected by generalized seizures, as well as on 40 recordings from healthy subjects. Entropy showed a very steady spatial distribution and appeared linked to the brain zone where seizures originated. A self-organizing map-based spatial clustering of entropy topography showed that the critical electrodes shared the same cluster a long time before the seizure onset. The healthy subjects showed a more random behaviour.
Conference Paper
Using machine learning algorithms to decode intended behavior from neural activity serves a dual purpose. First, these tools allow patients to interact with their environment through a Brain-Machine Interface (BMI). Second, analyzing the characteristics of such methods can reveal the relative significance of various features of neural activity, task stimuli, and behavior. In this study we adapted, implemented and tested a machine learning method called Kernel Auto-Regressive Moving Average (KARMA), for the task of inferring movements from neural activity in primary motor cortex. Our version of this algorithm is used in an online learning setting and is updated after a sequence of inferred movements is completed. We first used it to track real hand movements executed by a monkey in a standard 3D reaching task. We then applied it in a closed-loop BMI setting to infer intended movement, while the monkey's arms were comfortably restrained, thus performing the task using the BMI alone. KARMA is a recurrent method that learns a nonlinear model of output dynamics. It uses similarity functions (termed kernels) to compare between inputs. These kernels can be structured to incorporate domain knowledge into the method. We compare KARMA to various state-of-the-art methods by evaluating tracking performance and present results from the KARMA based BMI experiments.
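The following sketch is not the authors' KARMA algorithm (which is recurrent and trained online); it only illustrates the underlying kernel ARMA idea in an offline setting, using kernel ridge regression on lagged outputs and inputs. Names, lags and kernel parameters are placeholders, and scikit-learn is assumed to be available.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge   # assumed available

def fit_kernel_arma(u, y, p=3, q=3, gamma=0.5, alpha=1e-2):
    """Simplified kernel ARMA-style predictor: y[t] ~ f(y[t-p..t-1], u[t-q..t-1]).

    u: (T, d_in) exogenous inputs (e.g., neural features); y: (T, d_out) outputs
    (e.g., hand position). Returns the fitted model and the lagged-regressor builder.
    """
    T, lag = len(y), max(p, q)
    def regressor(t):
        return np.concatenate([y[t - p:t].ravel(), u[t - q:t].ravel()])
    X = np.array([regressor(t) for t in range(lag, T)])
    model = KernelRidge(kernel="rbf", gamma=gamma, alpha=alpha).fit(X, y[lag:])
    return model, regressor

# Toy usage: 1-D output driven by a mildly nonlinear recursion of itself and the input
rng = np.random.default_rng(1)
u = rng.standard_normal((300, 2))
y = np.zeros((300, 1))
for t in range(1, 300):
    y[t] = 0.8 * np.tanh(y[t - 1]) + 0.3 * u[t - 1, 0] + 0.05 * rng.standard_normal()
model, reg = fit_kernel_arma(u, y)
print(model.predict(reg(299)[None, :]))   # one-step-ahead prediction at the last step
```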
Article
Network Notation: Networks are often characterized by clusters of constituents that interact more closely with each other and have more connections to one another than they do with the rest of the components of the network. However, systematically identifying and studying such community structure in complicated networks is not easy, especially when the network interactions change over time or contain multiple types of connections, as seen in many biological regulatory networks or social networks. Mucha et al. (p. 876) developed a mathematical method to allow detection of communities that may be critical functional units of such networks. Application to real-world tasks—like making sense of the voting record in the U.S. Senate—demonstrated the promise of the method.
Article
Complex networks can often be divided in dense sub-networks called communities. Using a partition edit distance, we study how three community detection algorithms transform their outputs if the input network is slightly modified. The instabilities appear to be important and we propose a modification of one algorithm to stabilize it and to allow the tracking of the communities in an evolving network. This modification has one parameter which is a tradeoff between stability and quality. The resulting algorithm appears to be very effective. We finally use it on an evolving network of blogs.
Article
Resting-state functional connectivity studies with fMRI showed that the brain is intrinsically organized into large-scale functional networks for which the hemodynamic signature is stable for about 10 s. Spatial analyses of the topography of the spontaneous EEG also show discrete epochs of stable global brain states (so-called microstates), but they remain quasi-stationary for only about 100 ms. In order to test the relationship between the rapidly fluctuating EEG-defined microstates and the slowly oscillating fMRI-defined resting states, we recorded 64-channel EEG in the scanner while subjects were at rest with their eyes closed. Conventional EEG-microstate analysis determined the typical four EEG topographies that dominated across all subjects. The convolution of the time course of these maps with the hemodynamic response function allowed us to fit a linear model to the fMRI BOLD responses and revealed four distinct distributed networks. These networks were spatially correlated with four of the resting-state networks (RSNs) that were found by the conventional fMRI group-level independent component analysis (ICA). These RSNs have previously been attributed to phonological processing, visual imagery, attention reorientation, and subjective interoceptive-autonomic processing. We found no EEG-correlate of the default mode network. Thus, the four typical microstates of the spontaneous EEG seem to represent the neurophysiological correlate of four of the RSNs and show that they are fluctuating much more rapidly than fMRI alone suggests.
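The convolution-and-GLM step described above can be approximated as follows; this is only an illustrative sketch (a double-gamma HRF built from scipy.stats.gamma, with microstate labels assumed already resampled to the fMRI TR), not the study's actual pipeline.

```python
import numpy as np
from scipy.stats import gamma   # assumed available

def double_gamma_hrf(tr=2.0, duration=32.0):
    """Canonical-style double-gamma haemodynamic response function sampled at the TR."""
    t = np.arange(0, duration, tr)
    return gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0

def microstate_regressors(labels, n_states, tr=2.0):
    """Convolve per-microstate occurrence time courses with the HRF.

    labels: (T,) integer microstate label per time point, assumed already
    resampled to the fMRI TR. Returns a (T, n_states) design matrix.
    """
    hrf = double_gamma_hrf(tr)
    T = len(labels)
    X = np.zeros((T, n_states))
    for k in range(n_states):
        X[:, k] = np.convolve((labels == k).astype(float), hrf)[:T]
    return X

# GLM fit of one voxel's BOLD time course onto four microstate regressors
labels = np.random.randint(0, 4, size=200)
bold = np.random.randn(200)
X = np.column_stack([microstate_regressors(labels, 4), np.ones(200)])  # add intercept
beta, *_ = np.linalg.lstsq(X, bold, rcond=None)
print(np.round(beta, 3))
```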
Article
Dynamic community detection provides a coherent description of network clusters over time, allowing one to track the growth and death of communities as the network evolves. However, modularity maximization, a popular method for performing multilayer community detection, requires the specification of an appropriate null network as well as resolution and interlayer coupling parameters. Importantly, the ability of the algorithm to accurately detect community evolution is dependent on the choice of these parameters. In functional temporal networks, where evolving communities reflect changing functional relationships between network nodes, it is especially important that the detected communities reflect any state changes of the system. Here, we present analytical work suggesting that a uniform null network provides improved sensitivity to the detection of small evolving communities in temporal networks with positive edge weights bounded above by 1, such as certain types of correlation networks. We then propose a method for increasing the sensitivity of modularity maximization to state changes in nodal dynamics by modelling self-identity links between layers based on the self-similarity of the network nodes between layers. This method is more appropriate for functional temporal networks from both a modelling and mathematical perspective, as it incorporates the dynamic nature of network nodes. We motivate our method based on applications in neuroscience where network nodes represent neurons and functional edges represent similarity of firing patterns in time. We show that in simulated data sets of neuronal spike trains, updating interlayer links based on the firing properties of the neurons provides superior community detection of evolving network structure when groups of neurons change their firing properties over time. Finally, we apply our method to experimental calcium imaging data that monitors the spiking activity of hundreds of neurons to track the evolution of neuronal communities during a state change from the awake to anaesthetized state.
Article
Network topology inference is a significant problem in network science. Most graph signal processing (GSP) efforts to date assume that the underlying network is known and then analyze how the graph's algebraic and spectral characteristics impact the properties of the graph signals of interest. Such an assumption is often untenable beyond applications dealing with, e.g., directly observable social and infrastructure networks; and typically adopted graph construction schemes are largely informal, distinctly lacking an element of validation. This article offers an overview of graph-learning methods developed to bridge the aforementioned gap, by using information available from graph signals to infer the underlying graph topology. Fairly mature statistical approaches are surveyed first, where correlation analysis takes center stage along with its connections to covariance selection and high-dimensional regression for learning Gaussian graphical models. Recent GSP-based network inference frameworks are also described, which postulate that the network exists as a latent underlying structure and that observations are generated as a result of a network process defined in such a graph. A number of arguably more nascent topics are also briefly outlined, including inference of dynamic networks and nonlinear models of pairwise interaction, as well as extensions to directed (di)graphs and their relation to causal inference. All in all, this article introduces readers to challenges and opportunities for SP research in emerging topic areas at the crossroads of modeling, prediction, and control of complex behavior arising in networked systems that evolve over time.
Article
The temporal structure of self-generated cognition is a key attribute to the formation of a meaningful stream of consciousness. When at rest, our mind wanders from thought to thought in distinct mental states. Despite the marked importance of ongoing mental processes, it is challenging to capture and relate these states to specific cognitive contents. In this work, we employed ultra-high field functional magnetic resonance imaging (fMRI) and high-density electroencephalography (EEG) to study the ongoing thoughts of participants instructed to retrieve self-relevant past episodes for periods of 22sec. These task-initiated, participant-driven activity patterns were compared to a distinct condition where participants performed serial mental arithmetic operations, thereby shifting from self-related to self-unrelated thoughts. BOLD activity mapping revealed selective enhanced activity in temporal, parietal and occipital areas during the memory compared to the mental arithmetic condition, evincing their role in integrating the re-experienced past events into conscious representations during memory retrieval. Functional connectivity analysis showed that these regions were organized in two major subparts, previously associated to “scene-reconstruction” and “self-experience” subsystems. EEG microstate analysis allowed studying these participant-driven thoughts in the millisecond range by determining the temporal dynamics of brief periods of stable scalp potential fields. This analysis revealed selective modulation of occurrence and duration of specific microstates in the memory and in the mental arithmetic condition, respectively. EEG source analysis revealed similar spatial distributions of the sources of these microstates and the regions identified with fMRI. These findings imply a functional link between BOLD activity changes in regions related to a certain mental activity and the temporal dynamics of mentation, and support growing evidence that specific fMRI networks can be captured with EEG as repeatedly occurring brief periods of integrated coherent neuronal activity, lasting only fractions of seconds.
Article
This paper advocates Riemannian multi-manifold modeling for network-wide time-series analysis: brain-network data yield features which are viewed as points in or close to a union of multiple submanifolds of a Riemannian manifold. Distinguishing disparate time series amounts to clustering multiple Riemannian submanifolds. To this end, two feature-generation schemes for network-wide dynamic time series are put forth. The first one is motivated by Granger-causality arguments and uses an auto-regressive moving average model to map low-rank linear vector subspaces, spanned by column vectors of observability matrices, to points on the Grassmann manifold. The second one utilizes (non-linear) dependencies among network nodes by introducing kernel-based partial correlations to generate points in the manifold of positive-definite matrices. Capitalizing on recently developed research on clustering Riemannian submanifolds, an algorithm is provided to distinguish time series based on their Riemannian-geometry properties. Extensive numerical tests on synthetic and real fMRI data demonstrate that the proposed framework outperforms classical and state-of-the-art techniques in clustering brain-network states/structures.
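A common way to realize the first feature-generation scheme is sketched below: a rank-d state-space (ARMA-like) model is fitted with a standard SVD-based estimator, and the column space of its truncated observability matrix is taken as a point on the Grassmannian. This is an illustrative NumPy construction, not necessarily the exact estimator used in the paper.

```python
import numpy as np

def arma_grassmann_point(Y, d=5, m=3):
    """Map a network-wide time series to a point on the Grassmannian.

    Y: (N, T) matrix of N nodal time series. A rank-d state-space (ARMA-like)
    model y_t ~ C x_t, x_{t+1} ~ A x_t is fitted with an SVD-based estimator,
    and an orthonormal basis of the column space of the truncated observability
    matrix O_m = [C; CA; ...; CA^{m-1}] is returned.
    """
    U, S, Vt = np.linalg.svd(Y, full_matrices=False)
    C = U[:, :d]                               # observation matrix
    X = np.diag(S[:d]) @ Vt[:d]                # latent state trajectory, (d, T)
    A = X[:, 1:] @ np.linalg.pinv(X[:, :-1])   # state transition via least squares
    blocks, Ak = [], np.eye(d)
    for _ in range(m):
        blocks.append(C @ Ak)
        Ak = Ak @ A
    Q, _ = np.linalg.qr(np.vstack(blocks))     # (m*N, d) orthonormal basis
    return Q

# Two networks' time series mapped to Grassmann points, compared by geodesic distance
Q1 = arma_grassmann_point(np.random.randn(20, 300))
Q2 = arma_grassmann_point(np.random.randn(20, 300))
theta = np.arccos(np.clip(np.linalg.svd(Q1.T @ Q2, compute_uv=False), -1, 1))
print(np.linalg.norm(theta))
```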
Article
Electroencephalography (EEG) measures the electrical activity of the brain that is generated by the synchronized activity of thousands of neurons. In this paper, our first goal is to develop a novel method to track the EEG activation in different brain regions involved in the processing of target and non-target stimuli in the oddball paradigm. Secondly, we want to identify the difference in the pattern of activation for different oddball tasks. The EEG data have been acquired from twenty healthy volunteers for the visual oddball experiment. In the task, two types of visual stimuli, target (rare) and non-target (frequent), were randomly presented. The subjects were instructed to press the enter button when they identified the target stimuli. The acquired EEG data are converted into EEG topo-maps. In our method, the flow of activation between consecutive topo-maps is estimated by using the Horn and Schunck optical flow estimation method. It generates the motion field between consecutive topo-maps, which is considered as the flow of activation between two time frames. Different motion vectors are clustered into a group based on the activation level. These clusters are tracked between different frames as a measure of the activation flow. Finally, we analyze the flow of activation across different brain lobes for different cases encountered in the oddball paradigm by plotting the average activation graph with respect to time. Analysis of the data has revealed that high activation is observed in the frontal and occipital lobes in general for the oddball task. The frontal lobe shows high activation for the target-with-response case, followed by the occipital lobe. The activation in the frontal lobe starts increasing from frame no. 60 (240 ms) out of 125 frames. The occipital lobe shows high activation for the target-with-no-response and no-target-no-response cases in the region from frames 40 to 60. The parietal lobe shows high activation for target-with-response near the end of the task, from frame no. 100 onwards. Hence, we have been able to identify different patterns in the activation flow that differentiate different oddball tasks. This activation pattern is consistent with the event-related potential signal generated by the oddball paradigm. The pattern for the individual subjects also follows the average pattern of high activation in the frontal and occipital regions for target and non-target stimuli. We have also used cross-correlation (a classical connectivity method) for comparison of the results. A subjective comparison of the results shows that our proposed method is capable of tracking the EEG activation. https://authors.elsevier.com/a/1W3~~6DBR2plqr
Article
Despite substantial recent progress, our understanding of the principles and mechanisms underlying complex brain function and cognition remains incomplete. Network neuroscience proposes to tackle these enduring challenges. Approaching brain structure and function from an explicitly integrative perspective, network neuroscience pursues new ways to map, record, analyze and model the elements and interactions of neurobiological systems. Two parallel trends drive the approach: the availability of new empirical tools to create comprehensive maps and record dynamic patterns among molecules, neurons, brain areas and social systems; and the theoretical framework and computational tools of modern network science. The convergence of empirical and computational advances opens new frontiers of scientific inquiry, including network dynamics, manipulation and control of brain networks, and integration of network processes across spatiotemporal domains. We review emerging trends in network neuroscience and attempt to chart a path toward a better understanding of the brain as a multiscale networked system.
Article
Although promising from numerous applications, current Brain-Computer Interfaces (BCIs) still suffer from a number of limitations. In particular, they are sensitive to noise, outliers and the non-stationarity of ElectroEncephaloGraphic (EEG) signals, they require long calibration times and are not reliable. Thus, new approaches and tools, notably at the EEG signal processing and classification level, are necessary to address these limitations. Riemannian approaches, spearheaded by the use of covariance matrices, are such a very promising tool slowly adopted by a growing number of researchers. This article, after a quick introduction to Riemannian geometry and a presentation of the BCI-relevant manifolds, reviews how these approaches have been used for EEG-based BCI, in particular for feature representation and learning, classifier design and calibration time reduction. Finally, relevant challenges and promising research directions for EEG signal classification in BCIs are identified, such as feature tracking on manifold or multi-task learning.
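To make the covariance-based Riemannian approach concrete, the sketch below estimates trial covariance matrices, measures them with the affine-invariant Riemannian distance, and classifies by minimum distance to a (log-Euclidean) class mean. It is a simplified NumPy/SciPy illustration; the function names and toy data are ours.

```python
import numpy as np
from scipy.linalg import eigvalsh   # assumed available

def trial_covariances(trials):
    """trials: (n_trials, n_channels, n_samples) EEG; returns ridge-regularized
    (uncentered) sample covariance matrices, one per trial."""
    n, c, s = trials.shape
    covs = np.einsum('ncs,nds->ncd', trials, trials) / (s - 1)
    return covs + 1e-6 * np.eye(c)

def airm_distance(A, B):
    """Affine-invariant Riemannian distance between SPD matrices A and B."""
    lam = eigvalsh(B, A)                          # generalized eigenvalues of (B, A)
    return np.sqrt(np.sum(np.log(lam) ** 2))

def spd_log(C):
    w, V = np.linalg.eigh(C)
    return (V * np.log(w)) @ V.T

def spd_exp(S):
    w, V = np.linalg.eigh(S)
    return (V * np.exp(w)) @ V.T

def log_euclidean_mean(covs):
    """Log-Euclidean mean, a cheap surrogate for the Riemannian (Karcher) mean."""
    return spd_exp(np.mean([spd_log(C) for C in covs], axis=0))

def mdm_predict(C, class_means):
    """Minimum-distance-to-mean classification of a single covariance matrix."""
    return int(np.argmin([airm_distance(C, M) for M in class_means]))

# Toy usage: two 'classes' of random trials differing only in overall power
X0, X1 = np.random.randn(20, 8, 256), 1.5 * np.random.randn(20, 8, 256)
means = [log_euclidean_mean(trial_covariances(X0)),
         log_euclidean_mean(trial_covariances(X1))]
test_cov = trial_covariances(1.4 * np.random.randn(1, 8, 256))[0]
print(mdm_predict(test_cov, means))
```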
Article
We address the problem of identifying a graph structure from the observation of signals defined on its nodes. Fundamentally, the unknown graph encodes direct relationships between signal elements, which we aim to recover from observable indirect relationships generated by a diffusion process on the graph. The fresh look advocated here permeates benefits from convex optimization and stationarity of graph signals, in order to identify the graph shift operator (a matrix representation of the graph) given only its eigenvectors. These spectral templates can be obtained, e.g., from the sample covariance of independent graph signals diffused on the sought network. The novel idea is to find a graph shift that, while being consistent with the provided spectral information, endows the network with certain desired properties such as sparsity. To that end we develop efficient inference algorithms stemming from provably-tight convex relaxations of natural nonconvex criteria, particularizing the results for two shifts: the adjacency matrix and the normalized Laplacian. Algorithms and theoretical recovery conditions are developed not only when the templates are perfectly known, but also when the eigenvectors are noisy or when only a subset of them are given. Numerical tests showcase the effectiveness of the proposed algorithms in recovering social, brain, and amino-acid networks.
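The spectral-template idea lends itself to a compact convex program. The sketch below, assuming the cvxpy package is available, is a simplified stand-in for the formulations discussed in the article: it searches over eigenvalues so that the resulting shift is sparse, symmetric, nonnegative off the diagonal, and shares the given eigenvectors.

```python
import numpy as np
import cvxpy as cp   # assumed available

def shift_from_templates(V):
    """Recover a sparse graph shift operator sharing the eigenvectors in V.

    Solves: minimize ||S||_1 subject to S = V diag(lam) V^T, diag(S) = 0,
    S >= 0 elementwise, and a normalization (first row sums to 1) that rules
    out the trivial solution S = 0. A simplified stand-in, not the article's
    exact formulation.
    """
    n = V.shape[0]
    lam = cp.Variable(n)
    S = V @ cp.diag(lam) @ V.T
    constraints = [cp.diag(S) == 0, S >= 0, cp.sum(S[0, :]) == 1]
    cp.Problem(cp.Minimize(cp.sum(cp.abs(S))), constraints).solve()
    return S.value

# Toy usage: spectral templates taken from a 6-node ring graph's adjacency
A = np.roll(np.eye(6), 1, axis=1) + np.roll(np.eye(6), -1, axis=1)
_, V = np.linalg.eigh(A)
print(np.round(shift_from_templates(V), 2))   # ideally a scaled version of A
```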
Article
Studying interactions using resting-state functional magnetic resonance imaging (fMRI) signals between discrete brain loci is increasingly recognized as important for understanding normal brain function and may provide insights into many neurodegenerative disorders such as Parkinson's disease (PD). Though much work has been done investigating ways to infer brain connectivity networks, the temporal dynamics of brain coupling has been less well studied. Assuming that brain connections are purely static or purely dynamic is assuredly unrealistic, as the brain must strike a balance between stability and flexibility. In this paper, we propose making joint inference of time-invariant connections as well as time-varying coupling patterns by employing a multitask learning model followed by a least-squares approach to accurately estimate the connectivity coefficients. We applied this method to resting state fMRI data from PD and control subjects and estimated the eigenconnectivity networks to obtain the representative patterns of both static and dynamic brain connectivity features. We found lower network variations in the PD group, which were partially normalized with L-dopa medication, consistent with previous studies suggesting that cognitive inflexibility is characteristic of PD.
Chapter
Online learning has been at the center of focus in signal processing learning tasks for many decades, since the early days of LMS and Kalman filtering, and it has already found its place in a wide range of diverse applications and practical systems. More recently, together with the birth of new disciplines, such as information retrieval and bioinformatics, a new necessity for online learning techniques has emerged. The amount of available data, as well as the dimensionality of the involved spaces, can become excessively large for batch processing techniques to cope with. In batch processing, all data have to be known prior to the start of the learning process and have to be stored in the memory. This is also true for batch techniques that use the data sequentially. In the online techniques to be described in this chapter, every data point is used only a limited number of times. Besides their computational advantages, such techniques can easily accommodate modifications so as to deal with time-varying statistics and produce estimates that can adapt to such variations. This is the reason that such techniques are also known as adaptive or time-adaptive techniques, especially in the signal processing community jargon.
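As a minimal example of such an online technique, the sketch below implements a plain LMS adaptive filter in NumPy: each sample is processed once, the instantaneous error drives a stochastic-gradient update, and the filter tracks an unknown FIR system. The step size and filter order are illustrative choices.

```python
import numpy as np

def lms_filter(x, d, mu=0.05, order=5):
    """Least-mean-squares adaptive filter: online estimate of taps w so that
    w^T [x_t, ..., x_{t-order+1}] tracks the desired signal d_t.

    Each sample is processed once and then discarded, in keeping with the
    online setting; mu is the step size.
    """
    w = np.zeros(order)
    y_hat = np.zeros(len(d))
    for t in range(order - 1, len(d)):
        x_t = x[t - order + 1:t + 1][::-1]   # most recent sample first
        y_hat[t] = w @ x_t
        e = d[t] - y_hat[t]                  # instantaneous error
        w = w + mu * e * x_t                 # stochastic-gradient update
    return w, y_hat

# Toy usage: identify an unknown FIR system from noisy observations
rng = np.random.default_rng(2)
x = rng.standard_normal(5000)
h = np.array([0.4, -0.3, 0.2, 0.1, 0.05])
d = np.convolve(x, h)[:5000] + 0.01 * rng.standard_normal(5000)
w, _ = lms_filter(x, d)
print(np.round(w, 2))   # should approach the true taps h
```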
Article
Major depressive disorder (MDD) is characterized by abnormal resting-state functional connectivity (RSFC), especially in medial prefrontal cortical (MPFC) regions of the default network. However, prior research in MDD has not examined dynamic changes in functional connectivity as networks form, interact, and dissolve over time. We compared unmedicated individuals with MDD (n=100) to control participants (n=109) on dynamic RSFC (operationalized as standard deviation in RSFC over a series of sliding windows) of an MPFC seed region during a resting-state functional magnetic resonance imaging scan. Among participants with MDD, we also investigated the relationship between symptom severity and RSFC. Secondary analyses probed the association between dynamic RSFC and rumination. Results showed that individuals with MDD were characterized by decreased dynamic (less variable) RSFC between MPFC and regions of parahippocampal gyrus within the default network, a pattern related to sustained positive connectivity between these regions across sliding windows. In contrast, the MDD group exhibited increased dynamic (more variable) RSFC between MPFC and regions of insula, and higher severity of depression was related to increased dynamic RSFC between MPFC and dorsolateral prefrontal cortex. These patterns of highly variable RSFC were related to greater frequency of strong positive and negative correlations in activity across sliding windows. Secondary analyses indicated that increased dynamic RSFC between MPFC and insula was related to higher levels of recent rumination. These findings provide initial evidence that depression, and ruminative thinking in depression, are related to abnormal patterns of fluctuating communication among brain systems involved in regulating attention and self-referential thinking. Neuropsychopharmacology accepted article preview online, 03 December 2015. doi:10.1038/npp.2015.352.
Article
Recent years have witnessed a rapid growth of interest in moving functional magnetic resonance imaging (fMRI) beyond simple scan-length averages and into approaches that capture time-varying properties of connectivity. In this Perspective we use the term "chronnectome" to describe metrics that allow a dynamic view of coupling. In the chronnectome, coupling refers to possibly time-varying levels of correlated or mutually informed activity between brain regions whose spatial properties may also be temporally evolving. We primarily focus on multivariate approaches developed in our group and review a number of approaches with an emphasis on matrix decompositions such as principle component analysis and independent component analysis. We also discuss the potential these approaches offer to improve characterization and understanding of brain function. There are a number of methodological directions that need to be developed further, but chronnectome approaches already show great promise for the study of both the healthy and the diseased brain.
Article
Functional connectivity measured from resting state fMRI (R-fMRI) data has been widely used to examine the brain's functional activities and has been recently used to characterize and differentiate brain conditions. However, the dynamical transition patterns of the brain's functional states have been less explored. In this work, we propose a novel computational framework to quantitatively characterize the brain state dynamics via hidden Markov models (HMMs) learned from the observations of temporally dynamic functional connectomics, denoted as functional connectome states. The framework has been applied to the R-fMRI dataset including 44 post-traumatic stress disorder (PTSD) patients and 51 normal control (NC) subjects. Experimental results show that both PTSD and NC brains were undergoing remarkable changes in resting state and mainly transiting amongst a few brain states. Interestingly, further prediction with the best-matched HMM demonstrates that PTSD would enter into, but could not disengage from, a negative mood state. Importantly, 84 % of PTSD patients and 86 % of NC subjects are successfully classified via multiple HMMs using majority voting.
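A stripped-down version of this HMM-on-connectome-states idea is sketched below, assuming the hmmlearn package is available: sliding-window correlation features are fed to a Gaussian HMM, which yields a decoded state sequence and a transition matrix. Window length, step and the number of states are placeholders, not the study's settings.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM   # assumed available

def fit_connectome_state_hmm(ts, win=30, step=5, n_states=4):
    """Learn hidden brain-state dynamics from windowed connectivity features.

    ts: (T, N) regional time series. Each sliding window is summarized by the
    upper triangle of its correlation matrix (a crude 'functional connectome
    state' feature); a Gaussian HMM is then fitted to the feature sequence.
    """
    T, N = ts.shape
    iu = np.triu_indices(N, k=1)
    feats = np.array([np.corrcoef(ts[s:s + win].T)[iu]
                      for s in range(0, T - win + 1, step)])
    hmm = GaussianHMM(n_components=n_states, covariance_type='diag', n_iter=100)
    hmm.fit(feats)
    return hmm, hmm.predict(feats)       # model and decoded state sequence

# Toy usage on synthetic data (800 time points, 10 nodes)
hmm, states = fit_connectome_state_hmm(np.random.randn(800, 10))
print(states[:30])
print(hmm.transmat_.round(2))            # estimated state-transition matrix
```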
Article
Humans spend much of their time engaged in stimulus-independent thoughts, colloquially known as "daydreaming" or "mind-wandering." A fundamental question concerns how awake, spontaneous brain activity represents the ongoing cognition of daydreaming versus unconscious processes characterized as "intrinsic." Since daydreaming involves brief cognitive events that spontaneously fluctuate, we tested the hypothesis that the dynamics of brain network functional connectivity (FC) are linked with daydreaming. We determined the general tendency to daydream in healthy adults based on a daydreaming frequency scale (DDF). Subjects then underwent both resting state functional magnetic resonance imaging (rs-fMRI) and fMRI during sensory stimulation with intermittent thought probes to determine the occurrences of mind-wandering events. Brain regions within the default mode network (DMN), purported to be involved in daydreaming, were assessed for 1) static FC across entire fMRI scans, and 2) dynamic FC based on FC variability (FCV) across 30s progressively sliding windows of 2s increments within each scan. We found that during both resting and sensory stimulation states, individual differences in DDF were negatively correlated with static FC between the posterior cingulate cortex and a ventral DMN subsystem involved in future-oriented thought. Dynamic FC analysis revealed that DDF was positively correlated with FCV within the same DMN subsystem in the resting state but not during stimulation. However, dynamic but not static FC, in this subsystem was positively correlated with an individual's degree of self-reported mind-wandering during sensory stimulation. These findings identify temporal aspects of spontaneous DMN activity that reflect conscious and unconscious processes.
Article
Modeling of resting state functional magnetic resonance imaging (rs-fMRI) data using network models is of increasing interest. It is often desirable to group nodes into clusters to interpret the communication patterns between nodes. In this study we consider three different nonparametric Bayesian models for node clustering in complex networks. In particular, we test their ability to predict unseen data and their ability to reproduce clustering across datasets. The three generative models considered are the Infinite Relational Model (IRM), Bayesian Community Detection (BCD), and the Infinite Diagonal Model (IDM). The models define probabilities of generating links within and between clusters and the difference between the models lie in the restrictions they impose upon the between-cluster link probabilities. IRM is the most flexible model with no restrictions on the probabilities of links between clusters. BCD restricts the between-cluster link probabilities to be strictly lower than within-cluster link probabilities to conform to the community structure typically seen in social networks. IDM only models a single between-cluster link probability, which can be interpreted as a background noise probability. These probabilistic models are compared against three other approaches for node clustering, namely Infomap, Louvain modularity, and hierarchical clustering. Using 3 different datasets comprising healthy volunteers' rs-fMRI we found that the BCD model was in general the most predictive and reproducible model. This suggests that rs-fMRI data exhibits community structure and furthermore points to the significance of modeling heterogeneous between-cluster link probabilities.
Article
Recent work on both task-induced and resting-state functional magnetic resonance imaging (fMRI) data suggests that functional connectivity may fluctuate, rather than being stationary during an entire scan. Most dynamic studies are based on second-order statistics between fMRI time series or time courses derived from blind source separation, e.g., independent component analysis (ICA), to investigate changes of temporal interactions among brain regions. However, fluctuations related to spatial components over time are of interest as well. In this paper, we examine higher-order statistical dependence between pairs of spatial components, which we define as spatial functional network connectivity (sFNC), and changes of sFNC across a resting-state scan. We extract time-varying components from healthy controls and patients with schizophrenia to represent brain networks using independent vector analysis (IVA), which is an extension of ICA to multiple data sets and enables one to capture spatial variations. Based on mutual information among IVA components, we perform statistical analysis and Markov modeling to quantify the changes in spatial connectivity. Our experimental results suggest significantly more fluctuations in patient group and show that patients with schizophrenia have more variable patterns of spatial concordance primarily between frontoparietal, cerebellum and temporal lobe regions. This study extends upon earlier studies showing temporal connectivity differences in similar areas on average by providing evidence that the dynamic spatial interplay between these regions is also impacted by schizophrenia.
Article
The discovery of evolving communities in dynamic networks is an important research topic that poses challenging tasks. Evolutionary clustering is a recent framework for clustering dynamic networks that introduces the concept of temporal smoothness inside the community structure detection method. Evolutionary-based clustering approaches try to maximize cluster accuracy with respect to incoming data of the current time step, and minimize clustering drift from one time step to the successive one. In order to optimize both these two competing objectives, an input parameter that controls the preference degree of a user towards either the snapshot quality or the temporal quality is needed. In this paper the detection of communities with temporal smoothness is formulated as a multiobjective problem and a method based on genetic algorithms is proposed. The main advantage of the algorithm is that it automatically provides a solution representing the best trade-off between the accuracy of the clustering obtained, and the deviation from one time step to the successive. Experiments on synthetic data sets show the very good performance of the method when compared with state-of-the-art approaches.
Article
The study of extracting electroencephalogram (EEG) data as a source of significant information has recently gained attention. However, since EEG data are complex, it is difficult to extract them as a source of intended, significant information. In order to effectively extract EEG data, this paper employs the maximum entropy method (MEM) for frequency analyses and investigates the alpha and beta frequency bands, in which features are more apparent. Both the alpha and beta frequency bands are further divided into several sub-bands so as to extract detailed EEG data where the loss of data is small. In addition, learning vector quantization (LVQ) is used for clustering the EEG data with the extracted features. In this paper, we demonstrate the effectiveness of the proposed method by applying it to the EEG data of three subjects and comparing the results with related studies.
Article
Diffusion tensor imaging (DTI) and functional magnetic resonance imaging (fMRI) have been widely used to study structural and functional brain connectivity in recent years. A common assumption used in many previous functional brain connectivity studies is the temporal stationarity. However, accumulating literature evidence has suggested that functional brain connectivity is under temporal dynamic changes in different time scales. In this paper, a novel and intuitive approach is proposed to model and detect dynamic changes of functional brain states based on multimodal fMRI/DTI data. The basic idea is that functional connectivity patterns of all fiber-connected cortical voxels are concatenated into a descriptive functional feature vector to represent the brain's state, and the temporal change points of brain states are decided by detecting the abrupt changes of the functional vector patterns via the sliding window approach. Our extensive experimental results have shown that meaningful brain state change points can be detected in task-based fMRI/DTI, resting state fMRI/DTI, and natural stimulus fMRI/DTI data sets. Particularly, the detected change points of functional brain states in task-based fMRI corresponded well to the external stimulus paradigm administered to the participating subjects, thus partially validating the proposed brain state change detection approach. The work in this paper provides novel perspective on the dynamic behaviors of functional brain connectivity and offers a starting point for future elucidation of the complex patterns of functional brain interactions and dynamics.
Article
In this paper, we propose a Multi-Manifold Discriminant Analysis (MMDA) method for image feature extraction and pattern recognition based on graph embedded learning and under the Fisher discriminant analysis framework. In an MMDA, the within-class graph and between-class graph are, respectively, designed to characterize the within-class compactness and the between-class separability, seeking the discriminant matrix that simultaneously maximizes the between-class scatter and minimizes the within-class scatter. In addition, in an MMDA, the within-class graph can represent the sub-manifold information, while the between-class graph can represent the multi-manifold information. The proposed MMDA is extensively examined by using the FERET, AR and ORL face databases, and the PolyU finger-knuckle-print databases. The experimental results demonstrate that an MMDA is effective in feature extraction, leading to promising image recognition performance.
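The core optimization behind such discriminant embeddings reduces to a generalized eigenproblem. The sketch below uses plain within-class and between-class scatters (omitting the MMDA graph weighting for brevity) and solves it with SciPy; the function name and toy data are illustrative.

```python
import numpy as np
from scipy.linalg import eigh   # assumed available

def discriminant_projection(X, y, n_dims=2, reg=1e-3):
    """Fisher-style discriminant embedding via a generalized eigenproblem.

    Maximizes between-class scatter while minimizing within-class scatter.
    Plain class scatters are used here; MMDA additionally weights them with
    within-class/between-class graphs, which is omitted for brevity.
    """
    d = X.shape[1]
    mu = X.mean(axis=0)
    Sw, Sb = np.zeros((d, d)), np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sw += (Xc - mc).T @ (Xc - mc)                 # within-class scatter
        Sb += len(Xc) * np.outer(mc - mu, mc - mu)    # between-class scatter
    Sw += reg * np.eye(d)                             # regularize for invertibility
    vals, W = eigh(Sb, Sw)                            # generalized eigenvectors
    return W[:, np.argsort(vals)[::-1][:n_dims]]      # leading discriminant directions

# Toy usage: two Gaussian classes projected onto the leading direction
X = np.vstack([np.random.randn(100, 5) + 2.0, np.random.randn(100, 5)])
y = np.array([0] * 100 + [1] * 100)
W = discriminant_projection(X, y, n_dims=1)
print((X @ W)[:3].ravel(), (X @ W)[-3:].ravel())   # the two classes separate along W
```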
Article
Structure-function studies of neuronal networks have recently benefited from considerable progress in different areas of investigation. Advances in molecular genetics and imaging have allowed for the dissection of neuronal connectivity with unprecedented detail whereas in vivo recordings are providing much needed clues as to how sensory, motor and cognitive function is encoded in neuronal firing. However, bridging the gap between the cellular and behavioral levels will ultimately require an understanding of the functional organization of the underlying neuronal circuits. One way to unravel the complexity of neuronal networks is to understand how their connectivity emerges during brain maturation. In this review, we will describe how graph theory provides experimentalists with novel concepts that can be used to describe and interpret these developing connectivity schemes.
Article
A Hilbert space embedding for probability measures has recently been proposed, with applications including dimensionality reduction, homogeneity testing, and independence testing. This embedding represents any probability measure as a mean element in a reproducing kernel Hilbert space (RKHS). A pseudometric on the space of probability measures can be defined as the distance between distribution embeddings: we denote this as γk, indexed by the kernel function k that defines the inner product in the RKHS. We present three theoretical properties of γk. First, we consider the question of determining the conditions on the kernel k for which γk is a metric: such k are denoted characteristic kernels. Unlike pseudometrics, a metric is zero only when two distributions coincide, thus ensuring the RKHS embedding maps all distributions uniquely (i.e., the embedding is injective). While previously published conditions may apply only in restricted circumstances (e.g., on compact domains), and are difficult to check, our conditions are straightforward and intuitive: integrally strictly positive definite kernels are characteristic. Alternatively, if a bounded continuous kernel is translation-invariant on ℝ^d, then it is characteristic if and only if the support of its Fourier transform is the entire ℝ^d. Second, we show that the distance between distributions under γk results from an interplay between the properties of the kernel and the distributions, by demonstrating that distributions are close in the embedding space when their differences occur at higher frequencies. Third, to understand the nature of the topology induced by γk, we relate γk to other popular metrics on probability measures, and present conditions on the kernel k under which γk metrizes the weak topology. ©2010 Bharath K. Sriperumbudur, Arthur Gretton, Kenji Fukumizu, Bernhard Schölkopf and Gert R. G. Lanckriet.
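A small NumPy sketch of the embedding distance with a Gaussian RBF kernel (a characteristic kernel on R^d) may help fix ideas; it computes the standard biased estimate of the squared discrepancy between two samples, with illustrative data.

```python
import numpy as np

def mmd_gaussian(X, Y, sigma=1.0):
    """Biased estimate of the squared MMD (gamma_k^2) between samples X and Y
    under a Gaussian RBF kernel, which is characteristic on R^d: the population
    quantity vanishes if and only if the two distributions coincide.
    """
    def gram(A, B):
        sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
        return np.exp(-sq / (2 * sigma**2))
    return gram(X, X).mean() + gram(Y, Y).mean() - 2 * gram(X, Y).mean()

# Same vs. shifted distributions: the second value should be clearly larger
X, Y = np.random.randn(500, 2), np.random.randn(500, 2)
Z = np.random.randn(500, 2) + 1.0
print(mmd_gaussian(X, Y), mmd_gaussian(X, Z))
```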