Intrinsic Stability of Temporally Shifted Spike-Timing Dependent Plasticity

Center for Theoretical Neuroscience, Department of Neuroscience, Columbia University, New York, New York, United States of America.
PLoS Computational Biology (Impact Factor: 4.62). 11/2010; 6(11):e1000961. DOI: 10.1371/journal.pcbi.1000961
Source: PubMed


Spike-timing dependent plasticity (STDP), a widespread synaptic modification mechanism, is sensitive to correlations between presynaptic spike trains, and it generates competition among synapses. However, STDP has an inherent instability because strong synapses are more likely to be strengthened than weak ones, causing them to grow in strength until some biophysical limit is reached. Through simulations and analytic calculations, we show that a small temporal shift in the STDP window that causes synchronous, or nearly synchronous, pre- and postsynaptic action potentials to induce long-term depression can stabilize synaptic strengths. Shifted STDP also stabilizes the postsynaptic firing rate and can implement both Hebbian and anti-Hebbian forms of competitive synaptic plasticity. Interestingly, the overall level of inhibition determines whether plasticity is Hebbian or anti-Hebbian. Even a random symmetric jitter of a few milliseconds in the STDP window can stabilize synaptic strengths while retaining these features. The same results hold for a shifted version of the more recent "triplet" model of STDP. Our results indicate that the detailed shape of the STDP window function near the transition from depression to potentiation is of the utmost importance in determining the consequences of STDP, suggesting that this region warrants further experimental study.
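To make the mechanism concrete, here is a minimal sketch of a shifted pair-based exponential STDP window. The window shape is the standard exponential form; the shift d, the amplitudes, and the time constants below are illustrative assumptions, not the paper's fitted values.

```python
import numpy as np

def shifted_stdp(delta_t, d=2.0, A_plus=0.005, A_minus=0.00525,
                 tau_plus=20.0, tau_minus=20.0):
    """Weight change for a spike pair with timing difference
    delta_t = t_post - t_pre (ms). A shift d > 0 moves the LTP/LTD
    crossover so that synchronous pairs (delta_t ~ 0) fall on the
    depression side of the window."""
    dt = np.asarray(delta_t, dtype=float) - d  # shifted timing difference
    return np.where(dt > 0,
                    A_plus * np.exp(-dt / tau_plus),    # causal side: LTP
                    -A_minus * np.exp(dt / tau_minus))  # acausal side: LTD

# With the shift, synchronous pre/post spiking depresses the synapse:
print(shifted_stdp(0.0))   # < 0 (LTD)
print(shifted_stdp(15.0))  # > 0 (LTP)
```

Because strong synapses tend to fire the postsynaptic cell at short latency, pushing near-synchronous pairs into the depression lobe gives exactly those synapses a depressing bias, which is the intuition behind the stabilization result.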

    • ", 2007 ) seems to suggest a more multiplicative type rule ( van Rossum et al . , 2000 ; Babadi and Abbott , 2010 ) . Together , these results suggest that the amounts of homogenization and competition expressed by Hebbian mechanisms may vary from area - to - area , or maybe developmentally regulated , depending on the "
    ABSTRACT: The majority of distinct sensory and motor events occur as temporally ordered sequences with rich probabilistic structure. Sequences can be characterized by the probability of transitioning from the current state to upcoming states (forward probability), as well as the probability of having transitioned to the current state from previous states (backward probability). Despite the prevalence of probabilistic sequencing of both sensory and motor events, the Hebbian mechanisms that mold synapses to reflect the statistics of experienced probabilistic sequences are not well understood. Here, we show through analytic calculations and numerical simulations that Hebbian plasticity (correlation, covariance, and STDP) with pre-synaptic competition can develop synaptic weights equal to the conditional forward transition probabilities present in the input sequence. In contrast, post-synaptic competition can develop synaptic weights proportional to the conditional backward probabilities of the same input sequence. We demonstrate that to stably reflect the conditional probability of a neuron's inputs and outputs, local Hebbian plasticity requires balance between competitive learning forces that promote synaptic differentiation and homogenizing learning forces that promote synaptic stabilization. The balance between these forces dictates a prior over the distribution of learned synaptic weights, strongly influencing both the rate at which structure emerges and the entropy of the final distribution of synaptic weights. Together, these results demonstrate a simple correspondence between the biophysical organization of neurons, the site of synaptic competition, and the temporal flow of information encoded in synaptic weights by Hebbian plasticity while highlighting the utility of balancing learning forces to accurately encode probability distributions, and prior expectations over such probability distributions.
    Frontiers in Computational Neuroscience 08/2015; 9:92. DOI:10.3389/fncom.2015.00092 · 2.20 Impact Factor
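As a toy illustration of the claim in the abstract above (Hebbian potentiation plus presynaptic competition converging to forward transition probabilities), the following sketch learns the transition matrix of a small Markov chain. It is a rate-free caricature, not that paper's spiking model; the transition matrix, learning rate, and normalization scheme are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical forward transition matrix P(next state | current state)
P = np.array([[0.1, 0.6, 0.3],
              [0.5, 0.2, 0.3],
              [0.3, 0.3, 0.4]])
n_states = P.shape[0]

# Sample a long state sequence from the Markov chain
seq = [0]
for _ in range(20000):
    seq.append(rng.choice(n_states, p=P[seq[-1]]))

# Hebbian potentiation with presynaptic competition: each observed
# transition i -> j strengthens w[i, j], then the weights leaving
# presynaptic unit i are renormalized so they compete for a fixed sum.
w = np.full((n_states, n_states), 1.0 / n_states)
eta = 0.01
for i, j in zip(seq[:-1], seq[1:]):
    w[i, j] += eta          # correlation-driven potentiation
    w[i] /= w[i].sum()      # presynaptic normalization (competition)

print(np.round(w, 2))       # rows approach the forward probabilities P
```

Swapping the normalization axis (dividing columns instead of rows) implements postsynaptic competition and drives the weights toward backward probabilities instead, mirroring the correspondence the abstract describes.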
    • "Long-term stability can be problematic for correlative learning rules (e.g., Figure 5C), since bounded Hebbian synapses destabilize plastic networks by maximally potentiating or depressing synapses. Additional mechanisms such as weight-dependent weight changes (van Rossum et al., 2000) or fine tuning of window parameters (Kempter and Gerstner, 2001; Babadi and Abbott, 2010) have been shown to be able to keep weights in check. In contrast, owing to its plasticity dynamics during on-line probability estimation, spike-based BCPNN naturally demonstrated weight dependence (Figure 5B) along with a stable unimodal equilibrium weight distribution when exposed to prolonged uncorrelated stimulation. "
    ABSTRACT: The brain stores and retrieves information by initiating cascades of molecular changes that lead to a diverse repertoire of dynamical phenomena at higher levels of processing. Hebbian plasticity, neuromodulation, and homeostatic synaptic and intrinsic excitability all conspire to form and maintain memories. But it is still unclear how these seemingly redundant mechanisms could jointly orchestrate learning in a more unified system. To address this, we propose a Hebbian learning rule for spiking neurons inspired by Bayesian statistics. Synaptic weights and intrinsic currents are adapted on-line upon arrival of single spikes, which initiate a cascade of temporally interacting memory traces that locally estimate probabilities associated with relative neuronal activation levels. We show that the dynamics of these traces readily demonstrate a spike-timing dependence that stably returns to a set-point over long time scales, and that synaptic learning remains competitive despite this stability. Beyond unsupervised learning, we show how linking the traces with an externally driven signal could enable spike-based reinforcement learning. Neuronally, the traces are represented by an activity-dependent ion channel that is shown to regulate the input received by a postsynaptic cell and generate intrinsic graded persistent firing levels. We perform spike-based Bayesian learning in a simulated inference task using integrate and fire neurons that are Poisson-firing and fluctuation-driven, similar to the preferred regime of cortical neurons. Our results support the view that neurons can represent information in the form of probability distributions and that probabilistic inference can be a functional by-product of coupled synaptic and nonsynaptic mechanisms operating over several timescales. The model provides a biophysical realization of Bayesian computation by reconciling several observed neural phenomena whose functional effects are only partially understood in concert.
    Frontiers in Synaptic Neuroscience 04/2014; 6(8). DOI:10.3389/fnsyn.2014.00008
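The trace-based probability estimation described in the abstract above can be caricatured with one low-pass filter per probability. Real spike-based BCPNN uses a cascade of interacting traces; this sketch collapses the cascade into a single exponential filter, and the time constant and the floor eps are assumed values.

```python
import numpy as np

def bcpnn_weight(pre_spikes, post_spikes, dt=1.0, tau=1000.0, eps=1e-4):
    """Sketch of a BCPNN-style weight: exponentially filtered spike
    trains act as running estimates of P(pre), P(post), and
    P(pre, post); the weight is the log odds of coincident activity.
    `eps` is a floor that prevents log(0)."""
    decay = np.exp(-dt / tau)
    p_i = p_j = p_ij = eps
    for s_pre, s_post in zip(pre_spikes, post_spikes):
        p_i = decay * p_i + (1 - decay) * s_pre
        p_j = decay * p_j + (1 - decay) * s_post
        p_ij = decay * p_ij + (1 - decay) * (s_pre * s_post)
    return np.log((p_ij + eps) / ((p_i + eps) * (p_j + eps)))

rng = np.random.default_rng(1)
pre = (rng.random(5000) < 0.05).astype(float)
ind = (rng.random(5000) < 0.05).astype(float)
print(bcpnn_weight(pre, pre))  # correlated activity: weight > 0
print(bcpnn_weight(pre, ind))  # independent activity: weight near 0
```

Because the traces keep relaxing toward current probability estimates, the weight returns to a set-point under uncorrelated input, which is the stability property the abstract emphasizes.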
    • "Current-based LIF models are popular because of their relative simplicity (see e.g., Brunel, 2013) and they have the key advantage of facilitating the derivation of analytical closed-form solutions. Thus current-based synapses are convenient for developing mean field models (Grabska-Barwinska and Latham, 2013), event-based models (Touboul and Faugeras, 2011), or firing rate models (Helias et al., 2010; Ostojic and Brunel, 2011; Schaffer et al., 2013), as well as in studies examining the stability of neural states (Babadi and Abbott, 2010; Mongillo et al., 2012). Moreover, current-based models are often adopted, because of their simplicity, to investigate numerically network-scale phenomena (Memmesheimer, 2010; Renart and Van Rossum, 2012; Gutig et al., 2013; Lim and Goldman, 2013; Zhang et al., 2013). "
    ABSTRACT: Models of networks of Leaky Integrate-and-Fire (LIF) neurons are a widely used tool for theoretical investigations of brain function. These models have been used both with current- and conductance-based synapses. However, the differences in the dynamics expressed by these two approaches have so far been studied mainly at the single neuron level. To investigate how these synaptic models affect network activity, we compared the single neuron and neural population dynamics of conductance-based networks (COBNs) and current-based networks (CUBNs) of LIF neurons. These networks were endowed with sparse excitatory and inhibitory recurrent connections, and were tested in conditions including both low- and high-conductance states. We developed a novel procedure to obtain comparable networks by properly tuning the synaptic parameters not shared by the models. The comparable networks defined in this way displayed an excellent and robust match of first order statistics (average single neuron firing rates and average frequency spectrum of network activity). However, these comparable networks showed profound differences in the second order statistics of neural population interactions and in the modulation of these properties by external inputs. The correlation between inhibitory and excitatory synaptic currents and the cross-neuron correlation between synaptic inputs, membrane potentials and spike trains were stronger and more stimulus-modulated in the COBN. Because of these properties, the spike train correlation carried more information about the strength of the input in the COBN, although the firing rates were equally informative in both network models. Moreover, the network activity of the COBN showed stronger synchronization in the gamma band, and its spectral information about the input was higher and spread over a broader range of frequencies. These results suggest that the second order statistics of network dynamics depend strongly on the choice of synaptic model.
    Frontiers in Neural Circuits 03/2014; 8:12. DOI:10.3389/fncir.2014.00012 · 3.60 Impact Factor
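The CUBN/COBN distinction discussed above comes down to whether the synaptic current depends on the postsynaptic membrane potential. A minimal sketch of the two formulations (units and parameter values are illustrative, not the paper's tuned values):

```python
def syn_current_cubn(i_amp):
    """Current-based synapse: the postsynaptic current is a fixed
    waveform, independent of the membrane potential V."""
    return i_amp  # pA

def syn_current_cobn(g, V, E_rev=0.0):
    """Conductance-based synapse: the current scales with the driving
    force (E_rev - V), so it shrinks as V approaches the reversal
    potential, coupling input strength to the neuron's state."""
    return g * (E_rev - V)  # nS * mV = pA

# Same nominal strength, different behavior across membrane potentials:
for V in (-70.0, -55.0, -10.0):
    print(V, syn_current_cubn(50.0), syn_current_cobn(1.0, V))
```

This state-dependence is why the two models can be tuned to match first order statistics while still diverging in correlations and gamma-band synchronization.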