Inferring evoked brain connectivity through adaptive perturbation
ABSTRACT Inference of functional networks (representing the statistical associations between time series recorded from multiple sensors) has found important applications in neuroscience. However, networks exhibiting time-locked activity between physically independent elements can bias functional connectivity estimates that employ passive measurements. Here, a perturbative and adaptive method of inferring network connectivity based on measurement and stimulation, so-called "evoked network connectivity", is introduced. This procedure, employing a recursive Bayesian update scheme, allows principled network stimulation given a current network estimate inferred from all previous stimulations and recordings. The method decouples stimulus and detector design from network inference and can be applied to a wide range of clinical and basic neuroscience problems. The proposed method demonstrates improved accuracy compared to network inference based on passive observation of node dynamics, and an increased rate of convergence relative to network estimation employing a naïve stimulation strategy.
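The recursive Bayesian scheme described above can be illustrated with a toy simulation. The sketch below is an assumption-laden simplification, not the authors' algorithm: a hypothetical 4-node network, binary edges with Bernoulli posteriors tracked as log-odds, Gaussian measurement noise on evoked responses, and an adaptive rule that stimulates the node whose outgoing-edge posteriors are currently most uncertain.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy network to recover (directed adjacency, no self-edges).
N = 4
true_adj = np.array([[0, 1, 0, 0],
                     [0, 0, 1, 0],
                     [1, 0, 0, 1],
                     [0, 0, 0, 0]])
SIGMA = 0.5                    # sensor noise on evoked responses
log_odds = np.zeros((N, N))    # posterior log-odds for each edge src -> dst

def entropy(p):
    """Entropy of a Bernoulli(p) posterior, in nats."""
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -p * np.log(p) - (1 - p) * np.log(1 - p)

for _ in range(200):
    # Adaptive step: stimulate the node whose outgoing-edge posteriors
    # are, in total, the most uncertain given all previous trials.
    post = 1.0 / (1.0 + np.exp(-log_odds))
    src = int(np.argmax(entropy(post).sum(axis=1)))

    # Simulated evoked response at every other node: 1 if an edge exists,
    # 0 otherwise, plus Gaussian noise.
    y = true_adj[src] + SIGMA * rng.standard_normal(N)

    # Recursive Bayesian update via the Gaussian log-likelihood ratio:
    # log N(y; 1, sigma) - log N(y; 0, sigma) = (2y - 1) / (2 sigma^2).
    for dst in range(N):
        if dst != src:
            log_odds[src, dst] += (2 * y[dst] - 1) / (2 * SIGMA**2)

estimate = (log_odds > 0).astype(int)
np.fill_diagonal(estimate, 0)
```

After enough adaptive stimulations the posterior log-odds separate cleanly and thresholding at zero recovers the toy adjacency matrix; the single-node-at-a-time stimulation mirrors the interference-avoidance assumption mentioned in the citing excerpt further below.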
- Source available from: Shinung Ching
- "Indeed, a weak, but highly novel input may be more easily perceived than an intense, but more familiar, stimulus. The ability to assess the responsiveness of neuronal networks to novelty (at a particular moment in time, relative to past inputs) has immediate implications in the analysis and control of biophysiological neuronal network dynamics in different behavioral and clinical regimes. Here, as a first step, we seek to characterize the controllability of linear systems (linear networks) possessing high-dimensional input spaces, with respect to input novelty."
ABSTRACT: In this paper, we propose a novelty-based metric for quantitative characterization of the controllability of complex networks. This inherently bounded metric describes the average angular separation of an input with respect to the past input history. We use this metric to find the minimally novel input that drives a linear network to a desired state using unit average energy. Specifically, the minimally novel input is defined as the solution of a continuous-time, non-convex optimal control problem based on the introduced metric. We provide conditions for existence and uniqueness, and an explicit, closed-form expression for the solution. We support our theoretical results by characterizing the minimally novel inputs for an example of a recurrent neuronal network.
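The core quantity in this abstract, average angular separation of a candidate input from the past input history, can be sketched for finite-dimensional vectors. The function name and the normalization by pi below are illustrative choices, not the paper's exact continuous-time metric.

```python
import numpy as np

def average_angular_novelty(u, past_inputs):
    """Illustrative novelty score (a sketch, not the paper's metric):
    mean angle, in radians, between a candidate input u and each
    previously applied input, normalized to [0, 1] by dividing by pi."""
    u = u / np.linalg.norm(u)
    angles = []
    for v in past_inputs:
        v = v / np.linalg.norm(v)
        c = np.clip(np.dot(u, v), -1.0, 1.0)  # guard arccos domain
        angles.append(np.arccos(c))
    return float(np.mean(angles)) / np.pi

past = [np.array([1.0, 0.0]), np.array([1.0, 0.1])]
familiar = average_angular_novelty(np.array([1.0, 0.0]), past)  # near 0
novel = average_angular_novelty(np.array([0.0, 1.0]), past)     # near 0.5
```

A repeated input scores near zero while an orthogonal one scores near the midpoint of the bounded range, matching the intuition in the quoted passage that a weak but novel input stands apart from the familiar input history.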
- "Alternatively, methods using perturbation to infer the underlying connectivity of a network may be limited by similar constraints as those seen here. A recent study by Lepage et al. (2012) used mean-field approximations of cortical columnar processing to give a proof of concept for such a perturbation-based circuit mapping method. As opposed to neuroanatomical studies, such methods capture the functional connectivity of the network, which is inherently biased by the ongoing activity in a circuit. "
ABSTRACT: Brain function is characterized by dynamical interactions among networks of neurons. These interactions are mediated by network topology at many scales ranging from microcircuits to brain areas. Understanding how networks operate can be aided by understanding how the transformation of inputs depends upon network connectivity patterns, e.g., serial and parallel pathways. To tractably determine how single synapses or groups of synapses in such pathways shape these transformations, we modeled feed-forward networks of 7-22 neurons in which synaptic strength changed according to a spike-timing dependent plasticity (STDP) rule. We investigated how activity varied when dynamics were perturbed by an activity-dependent electrical stimulation protocol (spike-triggered stimulation; STS) in networks of different topologies and background input correlations. STS can successfully reorganize functional brain networks in vivo, but with a variability in effectiveness that may derive partially from the underlying network topology. In a simulated network with a single disynaptic pathway driven by uncorrelated background activity, structured spike-timing relationships between polysynaptically connected neurons were not observed. When background activity was correlated or parallel disynaptic pathways were added, however, robust polysynaptic spike-timing relationships were observed, and application of STS yielded predictable changes in synaptic strengths and spike-timing relationships. These observations suggest that precise input-related or topologically induced temporal relationships in network activity are necessary for polysynaptic signal propagation. Such constraints for polysynaptic computation suggest potential roles for higher-order topological structure in network organization, such as maintaining polysynaptic correlation in the face of relatively weak synapses.
Frontiers in Computational Neuroscience 01/2014; 8:5. DOI:10.3389/fncom.2014.00005
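The STDP rule referenced in this abstract is commonly written as a pair-based exponential update. The parameter values below are illustrative defaults, not those used in the cited study.

```python
import math

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP weight change for one pre/post spike pair.
    dt = t_post - t_pre in milliseconds. Amplitudes and time constants
    are illustrative, not taken from the study quoted above."""
    if dt >= 0:
        # Post-synaptic spike follows pre-synaptic spike: potentiation,
        # decaying exponentially with the spike-time difference.
        return a_plus * math.exp(-dt / tau_plus)
    else:
        # Pre follows post: depression, also exponentially decaying.
        return -a_minus * math.exp(dt / tau_minus)
```

Causal spike ordering (pre before post) strengthens the synapse and the reverse ordering weakens it, which is the mechanism by which spike-triggered stimulation can steer synaptic strengths in the simulations described above.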
- "Such a problem is relevant to applications in social networks and neuroscience. In prior work, such edge structure was identified in a neuronal network using an adaptive Bayesian inferential method. There, a central assumption was that only a single node could be excited at a given time to avoid the interference confound (i.e., determining from where an evoked response originated)."
ABSTRACT: The discrete prolate spheroidal sequences (DPSSs) - a set of optimally bandlimited sequences with unique properties - are important to applications in both science and engineering. In this work, properties of nonlinear system response due to DPSS excitation are reported. In particular, this output is shown to be approximately orthogonal after passing through a nonlinear, multiple-input multiple-output system with memory under quite general conditions. This work quantifies these conditions in terms of constraints upon the higher-order generalized transfer functions characterizing the Volterra expansion of a MIMO system, the Volterra order of the system, and the DPSS bandwidth parameter $W$ and time-bandwidth parameter $NW$. The approximate system output orthogonality allows multiple-input, multiple-output parameter identification of edge structure in interconnected nonlinear systems using simultaneous, DPSS excitation. This narrowband method of system identification is particularly appealing when compared to classical broadband system excitation in sensitive, neural engineering applications involving electrical stimulation of multiple brain regions. The utility of this work is demonstrated in simulated parameter identification in a nonlinear, biophysical model of multi-site neural stimulation.
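The two defining properties invoked here, orthogonality and optimal band-limitation of the DPSSs, can be checked numerically. The sketch below computes the sequences via the classical tridiagonal eigenproblem (a standard construction, not the paper's MIMO identification method) and evaluates each sequence's in-band energy concentration with the sinc kernel; the helper names are my own.

```python
import numpy as np

def dpss_tridiag(n, w, k):
    """First k discrete prolate spheroidal sequences of length n and
    half-bandwidth w, via the symmetric tridiagonal eigenproblem.
    Columns are ordered by decreasing eigenvalue (0th DPSS first)."""
    i = np.arange(n)
    diag = ((n - 1 - 2 * i) / 2.0) ** 2 * np.cos(2 * np.pi * w)
    off = i[1:] * (n - i[1:]) / 2.0
    t = np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)
    _, vecs = np.linalg.eigh(t)          # ascending eigenvalues
    return vecs[:, ::-1][:, :k]

def concentration(v, w):
    """Fraction of the sequence's energy inside the band [-w, w],
    computed with the sinc (Dirichlet) kernel matrix."""
    n = len(v)
    d = np.arange(n)[:, None] - np.arange(n)[None, :]
    with np.errstate(divide="ignore", invalid="ignore"):
        a = np.sin(2 * np.pi * w * d) / (np.pi * d)
    a[np.arange(n), np.arange(n)] = 2 * w  # limit of sinc at d = 0
    return float(v @ a @ v) / float(v @ v)

n, nw = 64, 4.0                # time-bandwidth parameter NW = 4
seqs = dpss_tridiag(n, nw / n, 3)
```

The leading sequences have concentration ratios near unity and are mutually orthogonal; it is this input-side orthogonality that the abstract argues is approximately preserved at the output of a nonlinear MIMO system, enabling simultaneous multi-site excitation.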