Article

Inferring evoked brain connectivity through adaptive perturbation.

Department of Mathematics & Statistics, Boston University, Boston, MA 02215, USA.
Journal of Computational Neuroscience (Impact Factor: 2.09). 09/2012; DOI: 10.1007/s10827-012-0422-8
Source: PubMed

ABSTRACT Inference of functional networks (representing the statistical associations between time series recorded from multiple sensors) has found important applications in neuroscience. However, networks exhibiting time-locked activity between physically independent elements can bias functional connectivity estimates that employ passive measurements. Here, a perturbative and adaptive method of inferring network connectivity based on measurement and stimulation, so-called "evoked network connectivity," is introduced. This procedure, employing a recursive Bayesian update scheme, allows principled network stimulation given a current network estimate inferred from all previous stimulations and recordings. The method decouples stimulus and detector design from network inference and can be applied to a wide range of clinical and basic neuroscience problems. The proposed method demonstrates improved accuracy compared to network inference based on passive observation of node dynamics, and an increased rate of convergence relative to network estimation employing a naïve stimulation strategy.
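The stimulate-then-update loop the abstract describes can be illustrated with a minimal sketch. Everything concrete here is an assumption for illustration only: a Bernoulli prior over directed edges, fixed detector true/false-positive rates (`P_RESP_GIVEN_EDGE`, `P_RESP_GIVEN_NO_EDGE`), and an entropy-based rule for choosing the next node to stimulate. The paper deliberately decouples stimulus and detector design from the inference, so its actual models may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5  # number of nodes

# Hypothetical ground-truth directed network (unknown to the estimator).
true_adj = rng.random((n, n)) < 0.3
np.fill_diagonal(true_adj, False)

# Prior probability that each directed edge exists.
p_edge = np.full((n, n), 0.5)
np.fill_diagonal(p_edge, 0.0)

# Assumed detector model: probability of observing a response at node j
# after stimulating node i, given the edge i->j does / does not exist.
P_RESP_GIVEN_EDGE = 0.9
P_RESP_GIVEN_NO_EDGE = 0.1

def stimulate(i):
    """Simulate stimulating node i and observing binary responses at all nodes."""
    p = np.where(true_adj[i], P_RESP_GIVEN_EDGE, P_RESP_GIVEN_NO_EDGE)
    return rng.random(n) < p

def bayes_update(p_prior, response):
    """One recursive Bayesian update of the stimulated node's outgoing-edge probabilities."""
    like_edge = np.where(response, P_RESP_GIVEN_EDGE, 1 - P_RESP_GIVEN_EDGE)
    like_none = np.where(response, P_RESP_GIVEN_NO_EDGE, 1 - P_RESP_GIVEN_NO_EDGE)
    return like_edge * p_prior / (like_edge * p_prior + like_none * (1 - p_prior))

for _ in range(200):
    # Adaptive choice (assumed heuristic): stimulate the node whose outgoing
    # edges are most uncertain, measured by Bernoulli entropy of the posteriors.
    with np.errstate(divide="ignore", invalid="ignore"):
        h = -(p_edge * np.log2(p_edge) + (1 - p_edge) * np.log2(1 - p_edge))
    h = np.nan_to_num(h)  # 0*log(0) -> 0 for saturated posteriors
    i = int(np.argmax(h.sum(axis=1)))
    p_edge[i] = bayes_update(p_edge[i], stimulate(i))
    np.fill_diagonal(p_edge, 0.0)

est = p_edge > 0.5  # thresholded network estimate
```

With informative detector rates, the posteriors saturate after a handful of stimulations per node, and the entropy heuristic automatically reallocates stimulation toward rows that remain uncertain.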

    ABSTRACT: In this paper, we propose a novelty-based metric for quantitative characterization of the controllability of complex networks. This inherently bounded metric describes the average angular separation of an input with respect to the past input history. We use this metric to find the minimally novel input that drives a linear network to a desired state using unit average energy. Specifically, the minimally novel input is defined as the solution of a continuous time, non-convex optimal control problem based on the introduced metric. We provide conditions for existence and uniqueness, and an explicit, closed-form expression for the solution. We support our theoretical results by characterizing the minimally novel inputs for an example of a recurrent neuronal network.
    ABSTRACT: This paper introduces a framework for quantitative characterization of the controllability of time-varying linear systems (or networks) in terms of input novelty. The motivation for such an approach comes from the study of biophysical sensory networks in the brain, wherein responsiveness to both energy and salience (or novelty) are presumably critical for mediating behavior and function. Here, we use an inner product to define the angular separation of the current input with respect to the past input history. Then, by constraining input energy, we define a non-convex optimal control problem to obtain the minimally novel input that effects a given state transfer. We provide analytical conditions for existence and uniqueness in continuous-time, as well as an explicit closed-form expression for the solution. In discrete time, we show that a relaxed convex optimization formulation provides the global optimal solution of the original non-convex problem. Finally, we show how the minimum novelty control can be used as a metric to study control properties of large scale recurrent neuronal networks and other complex linear systems. In particular, we highlight unique aspects of a system's controllability that are captured through the novelty-based metric. The result suggests that a multifaceted approach, combining energy-based analysis with other specific metrics, may be useful for obtaining more complete controllability characterizations.
    ABSTRACT: Brain function is characterized by dynamical interactions among networks of neurons. These interactions are mediated by network topology at many scales ranging from microcircuits to brain areas. Understanding how networks operate can be aided by understanding how the transformation of inputs depends upon network connectivity patterns, e.g., serial and parallel pathways. To tractably determine how single synapses or groups of synapses in such pathways shape these transformations, we modeled feed-forward networks of 7-22 neurons in which synaptic strength changed according to a spike-timing dependent plasticity (STDP) rule. We investigated how activity varied when dynamics were perturbed by an activity-dependent electrical stimulation protocol (spike-triggered stimulation; STS) in networks of different topologies and background input correlations. STS can successfully reorganize functional brain networks in vivo, but with a variability in effectiveness that may derive partially from the underlying network topology. In a simulated network with a single disynaptic pathway driven by uncorrelated background activity, structured spike-timing relationships between polysynaptically connected neurons were not observed. When background activity was correlated or parallel disynaptic pathways were added, however, robust polysynaptic spike timing relationships were observed, and application of STS yielded predictable changes in synaptic strengths and spike-timing relationships. These observations suggest that precise input-related or topologically induced temporal relationships in network activity are necessary for polysynaptic signal propagation. Such constraints for polysynaptic computation suggest potential roles for higher-order topological structure in network organization, such as maintaining polysynaptic correlation in the face of relatively weak synapses.
    Frontiers in Computational Neuroscience 01/2014; 8:5. DOI:10.3389/fncom.2014.00005 · 2.23 Impact Factor
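The novelty metric described in the two controllability abstracts above (the angular separation of the current input from the past input history, measured via an inner product) can be concretized in a minimal sketch. Summarizing the history by its mean is an illustrative assumption here, not the papers' exact construction.

```python
import numpy as np

def novelty(u_t, history):
    """Angular separation (radians) between the current input u_t and the
    past input history, the latter summarized by its mean (an assumption
    for illustration). Inherently bounded in [0, pi]."""
    u_t = np.asarray(u_t, dtype=float)
    h = np.mean(np.asarray(history, dtype=float), axis=0)
    cos = u_t @ h / (np.linalg.norm(u_t) * np.linalg.norm(h))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

past = [[1.0, 0.0], [1.0, 0.1]]
print(novelty([1.0, 0.05], past))  # aligned with the history: near 0
print(novelty([0.0, 1.0], past))   # nearly orthogonal: near pi/2
```

An input pointing along directions the system has already been driven in scores near zero, while an input exploring a new direction scores near pi/2, matching the intuition of "salience" the abstracts describe.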