
Intrinsic Stability of Temporally Shifted Spike-Timing Dependent Plasticity

Baktash Babadi, L. F. Abbott
Department of Neuroscience and Center for Theoretical Neuroscience, Columbia University, New York, New York, United States of America.
PLoS Computational Biology (Impact Factor: 4.83). 11/2010; 6(11):e1000961. DOI: 10.1371/journal.pcbi.1000961
Source: PubMed

ABSTRACT Spike-timing dependent plasticity (STDP), a widespread synaptic modification mechanism, is sensitive to correlations between presynaptic spike trains and it generates competition among synapses. However, STDP has an inherent instability because strong synapses are more likely to be strengthened than weak ones, causing them to grow in strength until some biophysical limit is reached. Through simulations and analytic calculations, we show that a small temporal shift in the STDP window that causes synchronous, or nearly synchronous, pre- and postsynaptic action potentials to induce long-term depression can stabilize synaptic strengths. Shifted STDP also stabilizes the postsynaptic firing rate and can implement both Hebbian and anti-Hebbian forms of competitive synaptic plasticity. Interestingly, the overall level of inhibition determines whether plasticity is Hebbian or anti-Hebbian. Even a random symmetric jitter of a few milliseconds in the STDP window can stabilize synaptic strengths while retaining these features. The same results hold for a shifted version of the more recent "triplet" model of STDP. Our results indicate that the detailed shape of the STDP window function near the transition from depression to potentiation is of the utmost importance in determining the consequences of STDP, suggesting that this region warrants further experimental study.
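
To make the mechanism concrete, here is a minimal sketch (in Python; the function name, the `shift` argument, and all parameter values are illustrative assumptions, not taken from the paper) of a pair-based STDP window shifted so that synchronous or nearly synchronous spike pairs fall on the depression side of the window:

```python
import numpy as np

def shifted_stdp_window(dt, A_plus=0.005, A_minus=0.00525,
                        tau_plus=20.0, tau_minus=20.0, shift=2.0):
    """Weight change for one pre/post spike pair, dt = t_post - t_pre (ms).

    The window is shifted by `shift` ms so that synchronous or nearly
    synchronous pairs (0 <= dt < shift) induce depression rather than
    potentiation. All parameter values are illustrative, not fitted.
    """
    d = dt - shift
    if d > 0:                       # post sufficiently after pre -> LTP
        return A_plus * np.exp(-d / tau_plus)
    else:                           # near-synchronous or post-before-pre -> LTD
        return -A_minus * np.exp(d / tau_minus)

# A synchronous pair (dt = 0) now lands on the depression side of the window.
for dt in (-10.0, 0.0, 1.0, 5.0, 20.0):
    print(f"dt = {dt:6.1f} ms -> dw = {shifted_stdp_window(dt):+.6f}")
```

With `shift` set to zero this reduces to the standard pair-based window; the stabilizing effect described in the abstract comes entirely from moving the LTP/LTD transition away from dt = 0.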

Cited by:
    • "Similar findings about 791 this inversed Hebbian plasticity rule have been reported in previous in vitro 792 (excitatory-inhibitory neuronal couplings), animal and computational studies 793 only (Bell et al. 1993; Caporale and Dan 2008; Harvey-Girard et al. 2010; 794 Roberts and Leen 2010). In these studies, the overall level of inhibition seems 795 to play an important role for the induction of either Hebbian or anti-Hebbian 796 plasticity (Babadi and Abbott 2010), and it has been linked to adaptive filtering 797 functions in the sensory domain (for reference see (Requarth and Sawtell 798 2011)). 799 "
    ABSTRACT: Spike timing-dependent plasticity (STDP) has been proposed as one of the key mechanisms underlying learning and memory. Repetitive median nerve stimulation (MNS) followed by transcranial magnetic stimulation (TMS) of the contralateral primary motor cortex (M1), defined as paired associative stimulation (PAS), has been used as an in vivo model of STDP in humans. PAS-induced excitability changes in M1 have been repeatedly shown to be time-dependent in an STDP-like fashion since synchronous arrival of inputs within M1 induces long term potentiation (LTP)-like effects while an asynchronous arrival induces long term depression (LTD)-like effects. Here we show that interhemispheric inhibition of the sensorimotor network during PAS with the peripheral stimulation over the hand ipsilateral to the motor cortex receiving TMS (ipsi-PAS) results in an LTD-like effect, as opposed to the standard STDP-like effect seen for contralateral PAS. Furthermore, we could show that this reversed associative plasticity critically depends on the timing interval between afferent and cortical stimulation. These results indicate that the outcome of associative stimulation in the human brain depends on functional network interactions (inhibition or facilitation) at a systems level and can either follow standard or reversed STDP-like plasticity.
    Journal of Neurophysiology 02/2013; 109(9). DOI:10.1152/jn.01004.2012 · 3.04 Impact Factor
    • "Equation 3 is suggestive of a fast dynamics where synaptic change takes place on the temporal scale of the firing rate dynamics. It has been shown very recently, however, that such dynamics leads to synaptic instability unless some mechanisms for boundedness or delays are taken into account (Babadi and Abbott 2010). Although this has been shown for the case of spiking dynamics, the same also holds for fast dynamics in the context of neural field equations: Synchronized activation will lead to stronger synaptic weight, which in turn will result in more correlated activity and still higher synaptic change. "
    ABSTRACT: We introduce a modified-firing-rate model based on Hebbian-type changing synaptic connections. The existence and stability of solutions such as rest state, bumps, and traveling waves are shown for this type of model. Three types of kernels, namely exponential, Mexican hat, and periodic synaptic connections, are considered. In the former two cases, the existence of a rest state solution is proved and the conditions for their stability are found. Bump solutions are shown for two kinds of synaptic kernels, and their stability is investigated by constructing a corresponding Evans function that holds for a specific range of values of the kernel coefficient strength (KCS). Applying a similar method, we consider exponential synaptic connections, where traveling wave solutions are shown to exist. Simulation and numerical analysis are presented for all these cases to illustrate the resulting solutions and their stability.
    Biological Cybernetics 03/2012; 106(1):15-26. DOI:10.1007/s00422-012-0475-9 · 1.93 Impact Factor
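
The excerpt above points to the positive-feedback loop that makes fast Hebbian dynamics unstable: stronger weights produce more postsynaptic activity, which drives further potentiation. A toy sketch of that loop for a single synapse in a rate model (all rates, constants, and the clipping bound are illustrative assumptions, not taken from either paper):

```python
def simulate(w0=0.1, r_pre=5.0, eta=1e-3, steps=2000, w_max=None):
    """Fast Hebbian rate dynamics for a single synapse: the postsynaptic
    rate is proportional to w * r_pre, so the Hebbian update
    dw ~ eta * r_pre * r_post reinforces its own growth. Clipping at
    w_max stands in for the ad hoc boundedness mentioned in the excerpt.
    All values are illustrative."""
    w = w0
    for _ in range(steps):
        r_post = w * r_pre           # postsynaptic rate grows with the weight
        w += eta * r_pre * r_post    # Hebbian update: pre * post correlation
        if w_max is not None:
            w = min(w, w_max)
    return w

print("unbounded:", simulate())            # grows without limit
print("clipped  :", simulate(w_max=1.0))   # saturates at the imposed bound
```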
    • "(T.K. Leen). 1993; Hansen, 1993; Orr & Leen, 1993; Radons, 1993) and to equilibrium distributions in biological learning (Babadi & Abbott, 2010; Câteau & Fukai, 2003; Kepecs, van Rossum, Song, & Tegney, 2002; Masuda & Aihara, 2004; Rubin, Lee, & Sompolinsky, 2001; van Rossum, Bi, & Turrigiano, 2000; Zhu, Lai, Hoppensteadt, & He, 2006). Despite its utility, adopting the FPE to approximate jump processes is not well-justified. "
    ABSTRACT: On-line machine learning algorithms, many biological spike-timing-dependent plasticity (STDP) learning rules, and stochastic neural dynamics evolve by Markov processes. A complete description of such systems gives the probability densities for the variables. The evolution and equilibrium state of these densities are given by a Chapman-Kolmogorov equation in discrete time, or a master equation in continuous time. These formulations are analytically intractable for most cases of interest, and to make progress a nonlinear Fokker-Planck equation (FPE) is often used in their place. The FPE is limited, and some argue that its application to describe jump processes (such as in these problems) is fundamentally flawed. We develop a well-grounded perturbation expansion that provides approximations for both the density and its moments. The approach is based on the system size expansion in statistical physics (which does not give approximations for the density), but our simple development makes the methods accessible and invites application to diverse problems. We apply the method to calculate the equilibrium distributions for two biologically-observed STDP learning rules and for a simple nonlinear machine-learning problem. In all three examples, we show that our perturbation series provides good agreement with Monte-Carlo simulations in regimes where the FPE breaks down.
    Neural networks: the official journal of the International Neural Network Society 02/2012; 32:219-28. DOI:10.1016/j.neunet.2012.02.006 · 2.08 Impact Factor
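
As a rough illustration of the kind of quantity computed in the abstract above, the equilibrium weight distribution of a toy jump-process plasticity rule can be estimated by direct Monte Carlo sampling. This is a sketch under assumed dynamics (additive potentiation, multiplicative depression, illustrative parameters), not the authors' perturbation expansion or the cited papers' fitted STDP rules:

```python
import numpy as np

rng = np.random.default_rng(0)

def equilibrium_samples(n_syn=5000, steps=5000, p_ltp=0.5,
                        a_plus=0.01, a_minus=0.02):
    """Monte Carlo sampling of the stationary weight distribution of a toy
    jump process: at each event a weight takes either an additive LTP jump
    or a multiplicative LTD jump (which keeps weights positive).
    Parameters are illustrative, not taken from the cited papers."""
    w = np.full(n_syn, 0.5)
    for _ in range(steps):
        ltp = rng.random(n_syn) < p_ltp
        w = np.where(ltp, w + a_plus, w * (1.0 - a_minus))
        w = np.clip(w, 0.0, 1.0)
    return w

w = equilibrium_samples()
# A Gaussian (FPE-style) summary keeps only the first two moments; comparing
# it with the sampled histogram shows what a diffusion approximation can miss.
print(f"mean = {w.mean():.3f}, std = {w.std():.3f}")
hist, _ = np.histogram(w, bins=20, range=(0.0, 1.0), density=True)
print(np.round(hist, 2))
```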