Article
PDF available

Contrasting the effects of adaptation and synaptic filtering on the timescales of dynamics in recurrent networks


Abstract

Neural activity in awake behaving animals exhibits a vast range of timescales that can be severalfold larger than the membrane time constant of individual neurons. Two types of mechanisms have been proposed to explain this conundrum. One possibility is that large timescales are generated by a network mechanism based on positive feedback, but this hypothesis requires fine-tuning of the strength or structure of the synaptic connections. A second possibility is that large timescales in the neural dynamics are inherited from large timescales of underlying biophysical processes, two prominent candidates being intrinsic adaptive ionic currents and synaptic transmission. How the timescales of adaptation or synaptic transmission influence the timescale of the network dynamics has, however, not been fully explored. To address this question, here we analyze large networks of randomly connected excitatory and inhibitory units with additional degrees of freedom that correspond to adaptation or synaptic filtering. We determine the fixed points of the systems, their stability to perturbations and the corresponding dynamical timescales. Furthermore, we apply dynamical mean field theory to study the temporal statistics of the activity in the fluctuating regime, and examine how the adaptation and synaptic timescales transfer from individual units to the whole population. Our overarching finding is that synaptic filtering and adaptation in single neurons have very different effects at the network level. Unexpectedly, the macroscopic network dynamics do not inherit the large timescale present in adaptive currents. In contrast, the timescales of network activity increase proportionally to the time constant of the synaptic filter. Altogether, our study demonstrates that the timescales of different biophysical processes have different effects on the network level, so that the slow processes within individual neurons do not necessarily induce slow activity in large recurrent neural networks.
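To make the contrast concrete, the sketch below simulates a randomly connected rate network twice, once with a single adaptation variable per unit and once with a first-order synaptic filter on the recurrent input, and compares a crude estimate of the network autocorrelation time. The equations follow the generic form of such models; the parameter values, the Gaussian coupling statistics, and the 1/e definition of the autocorrelation time are illustrative choices, not the paper's exact formulation.

# Minimal sketch (not the paper's exact model): a randomly connected rate network
# with either one adaptation variable per unit or a first-order synaptic filter.
# All parameter values and the 1/e autocorrelation-time estimate are illustrative.
import numpy as np

rng = np.random.default_rng(0)
N, g = 500, 2.5                          # network size, coupling strength
tau_m, tau_a, tau_s = 1.0, 10.0, 10.0    # membrane, adaptation, synaptic time constants (a.u.)
beta = 1.0                               # adaptation strength
J = g * rng.standard_normal((N, N)) / np.sqrt(N)
phi = np.tanh

def simulate(mechanism, T=500.0, dt=0.05):
    x = 0.1 * rng.standard_normal(N)
    aux = np.zeros(N)                    # adaptation variable or filtered synaptic input
    steps = int(T / dt)
    traj = np.empty((steps, N))
    for t in range(steps):
        rec = J @ phi(x)
        if mechanism == "adaptation":
            dx = (-x - beta * aux + rec) / tau_m
            daux = (-aux + x) / tau_a
        else:                            # synaptic filtering
            daux = (-aux + rec) / tau_s
            dx = (-x + aux) / tau_m
        x = x + dt * dx
        aux = aux + dt * daux
        traj[t] = x
    return traj[steps // 2:]             # discard the transient

def ac_time(traj, dt=0.05, n_units=20):
    # Lag at which the average single-unit autocorrelation first drops below 1/e.
    z = traj - traj.mean(axis=0)
    ac = np.mean([np.correlate(zi, zi, "full")[len(zi) - 1:] for zi in z.T[:n_units]], axis=0)
    ac = ac / ac[0]
    below = np.where(ac < np.exp(-1))[0]
    return below[0] * dt if below.size else np.inf

for mech in ("adaptation", "synaptic"):
    print(mech, "network autocorrelation time ~", ac_time(simulate(mech)))

Varying tau_a and tau_s in this toy setting makes it easy to compare how the estimated network timescale responds to each mechanism.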
... Dynamic mean-field theory (DMFT) [10] makes microscopic observables accessible because, instead of coarse-graining the activity of the neurons, it coarse-grains their input. This has led to significant insights into the interrelation between network structure and intrinsic timescales for recurrent networks of (non-spiking) rate neurons [10][11][12][13][14][15]. In particular, it has been shown that very slow intrinsic timescales emerge close to a transition to chaos in autonomous networks [10]. ...
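For reference, the single-site reduction referred to here takes the following form for the standard rate model with independent Gaussian couplings of variance g^2/N; the notation is the generic one used for such models, not tied to any particular one of the cited papers.

% Dynamic mean-field (single-site) reduction of the standard random rate network
% \tau \dot{x}_i = -x_i + \sum_j J_{ij} \phi(x_j) (generic notation, illustrative):
\tau \dot{x}(t) = -x(t) + \eta(t), \qquad
\langle \eta(t)\,\eta(t+s)\rangle = g^{2} C_{\phi}(s), \qquad
C_{\phi}(s) = \langle \phi(x(t))\,\phi(x(t+s))\rangle .
% For \tau = 1 the autocorrelation \Delta(s) = \langle x(t)\,x(t+s)\rangle obeys
\ddot{\Delta}(s) = \Delta(s) - g^{2} C_{\phi}(s),
% and its decay time diverges as g approaches the critical coupling from above,
% which is the slow-timescale behavior near the transition to chaos mentioned in the excerpt.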
... Interestingly, simply adding a noisy input to the network significantly reduces this effect and even leads to a novel dynamical regime [13]. Furthermore, increasing the complexity of the single-neuron dynamics leads to the counter-intuitive result that timescales of slow adaptive currents are not straightforwardly expressed in the network dynamics [14], and to yet another dynamical regime termed "resonant chaos" [15]. In combination, these results suggest that the mechanisms shaping the intrinsic timescales in recurrent networks are highly involved. ...
... We developed a theory that directly links the network structure of spiking network models to the emergent timescales on the level of individual neurons. To this end, we extended the results from dynamic mean-field theory for fully connected networks of (non-spiking) rate units [10][11][12][13][14][15] to networks of sparsely coupled spiking neurons. In particular, we showed that the mean-field equations, Eqs. ...
Preprint
Full-text available
A complex interplay of single-neuron properties and the recurrent network structure shapes the activity of individual cortical neurons, which differs in general from the respective population activity. We develop a theory that makes it possible to investigate the influence of both network structure and single-neuron properties on the single-neuron statistics in block-structured sparse random networks of spiking neurons. In particular, the theory predicts the neuron-level autocorrelation times, also known as intrinsic timescales, of the neuronal activity. The theory is based on a postulated extension of dynamic mean-field theory from rate networks to spiking networks, which is validated via simulations. It accounts for both static variability, e.g. due to a distributed number of incoming synapses per neuron, and dynamical fluctuations of the input. To illustrate the theory, we apply it to a balanced random network of leaky integrate-and-fire neurons, a balanced random network of generalized linear model neurons, and a biologically constrained network of leaky integrate-and-fire neurons. For the generalized linear model network, an analytical solution to the colored noise problem allows us to obtain self-consistent firing rate distributions, single-neuron power spectra, and intrinsic timescales. For the leaky integrate-and-fire networks, we obtain the same quantities by means of a novel analytical approximation of the colored noise problem that is valid in the fluctuation-driven regime. Our results provide a further step towards an understanding of the dynamics in recurrent spiking cortical networks.
... In view of our results, this would make it possible to dynamically shape the SNR depending on the requirements imposed by the behavioral context. While our theory is applicable to single units with D interacting variables, the effect of a single adaptation variable (D = 2) on the dynamics of random recurrent networks was also studied independently and simultaneously by another group [59], who reached results consistent with ours [60]. The authors of [59] used a slightly different network architecture and did not focus on the relation between single neuron response and spectral properties, but rather on the correlation time of the network activity and on the effect of white noise input. ...
... While our theory is applicable to single units with D interacting variables, the effect of a single adaptation variable (D = 2) on the dynamics of random recurrent networks was also studied independently and simultaneously by another group [59], who reached results consistent with ours [60]. The authors of [59] used a slightly different network architecture and did not focus on the relation between single neuron response and spectral properties, but rather on the correlation time of the network activity and on the effect of white noise input. One major difference is the conclusion reached regarding correlation time: by using a different definition, in [59] the authors conclude that the correlation time does not scale with the adaptation timescale. ...
... The authors of [59] used a slightly different network architecture and did not focus on the relation between single neuron response and spectral properties, but rather on the correlation time of the network activity and on the effect of white noise input. One major difference is the conclusion reached regarding correlation time: by using a different definition, in [59] the authors conclude that the correlation time does not scale with the adaptation timescale. Based on our analysis, we infer that the definition of correlation time used in [59] captures only the oscillatory contribution to the correlation time, and not its long tail. ...
Article
Full-text available
While most models of randomly connected neural networks assume single-neuron models with simple dynamics, neurons in the brain exhibit complex intrinsic dynamics over multiple timescales. We analyze how the dynamical properties of single neurons and recurrent connections interact to shape the effective dynamics in large randomly connected networks. A novel dynamical mean-field theory for strongly connected networks of multi-dimensional rate neurons shows that the power spectrum of the network activity in the chaotic phase emerges from a nonlinear sharpening of the frequency response function of single neurons. For the case of two-dimensional rate neurons with strong adaptation, we find that the network exhibits a state of “resonant chaos”, characterized by robust, narrow-band stochastic oscillations. The coherence of stochastic oscillations is maximal at the onset of chaos and their correlation time scales with the adaptation timescale of single units. Surprisingly, the resonance frequency can be predicted from the properties of isolated neurons, even in the presence of heterogeneity in the adaptation parameters. In the presence of these internally-generated chaotic fluctuations, the transmission of weak, low-frequency signals is strongly enhanced by adaptation, whereas signal transmission is not influenced by adaptation in the non-chaotic regime. Our theoretical framework can be applied to other mechanisms at the level of single neurons, such as synaptic filtering, refractoriness or spike synchronization. These results advance our understanding of the interaction between the dynamics of single units and recurrent connectivity, which is a fundamental step toward the description of biologically realistic neural networks.
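As a concrete illustration of the single-neuron frequency response invoked above, consider a generic two-dimensional rate unit with linear adaptation of strength beta and time constant tau_a; this parametrization is a simplified stand-in, not necessarily the one used in the cited work.

% Generic adaptive rate unit (illustrative parametrization):
%   \tau_m \dot{x} = -x - \beta a + I(t), \qquad \tau_a \dot{a} = -a + x .
% Linearizing and Fourier transforming gives the single-unit response function
\chi(\omega) = \frac{1}{\,1 + i\omega\tau_m + \dfrac{\beta}{1 + i\omega\tau_a}\,},
% which is suppressed at low frequencies (|\chi(0)| = 1/(1+\beta)) and, for sufficiently
% strong adaptation, peaks near \omega_0 \approx \sqrt{(1+\beta)/(\tau_m \tau_a)};
% this single-unit resonance is what the recurrent chaotic dynamics can sharpen.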
... This is a manifestly self-organized slowing down. Other mechanisms for slowing down dynamics have been proposed where the slow timescales of the network dynamics are inherited from other slow internal processes such as synaptic filtering [65,66]; however, such mechanisms differ from the slowing due to gating; they do not seem to display the pinching and clumping, and they also do not rely on self-organized behavior. ...
... We may do the same for g_44, g_55, and g_66, but it turns out that x_2 and x_3 are sufficient to compute the spectral curve. Next, divide by g_11 and send all g_ii → 0, keeping the ratios fixed. ...
Article
Full-text available
Recurrent neural networks (RNNs) are powerful dynamical models, widely used in machine learning (ML) and neuroscience. Prior theoretical work has focused on RNNs with additive interactions. However, gating, i.e., multiplicative, interactions are ubiquitous in real neurons and also the central feature of the best-performing RNNs in ML. Here, we show that gating offers flexible control of two salient features of the collective dynamics: (i) timescales and (ii) dimensionality. The gate controlling timescales leads to a novel, marginally stable state, where the network functions as a flexible integrator. Unlike previous approaches, gating permits this important function without parameter fine-tuning or special symmetries. Gates also provide a flexible, context-dependent mechanism to reset the memory trace, thus complementing the memory function. The gate modulating the dimensionality can induce a novel, discontinuous chaotic transition, where inputs push a stable system to strong chaotic activity, in contrast to the typically stabilizing effect of inputs. At this transition, unlike additive RNNs, the proliferation of critical points (topological complexity) is decoupled from the appearance of chaotic dynamics (dynamical complexity). The rich dynamics are summarized in phase diagrams, thus providing a map for principled parameter initialization choices to ML practitioners.
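To see in the simplest terms how a gate can control the timescale, the toy sketch below uses a single rate unit whose leak is multiplied by a sigmoidal gate, so the effective time constant becomes tau divided by the gate value; this is a deliberately reduced, GRU-like illustration, not the model analyzed in the cited paper.

# Toy gated leak (illustrative only, not the cited paper's model): a sigmoidal gate
# multiplies the leak term, so the effective time constant tau/gate grows without
# bound as the gate closes and the unit approaches a perfect integrator.
import numpy as np

def simulate_gated_unit(gate_bias, T=20.0, dt=0.01, tau=1.0):
    sigmoid = lambda u: 1.0 / (1.0 + np.exp(-u))
    z = sigmoid(gate_bias)          # static gate here; in a full RNN it depends on input and state
    x = 1.0
    for _ in range(int(T / dt)):
        x += dt * (z / tau) * (-x)  # gated leak: dx/dt = -(z/tau) * x
    return x

for b in (2.0, 0.0, -4.0):          # open, intermediate, nearly closed gate
    print(f"gate bias {b:+.1f}: fraction of initial activity left after T = {simulate_gated_unit(b):.3f}")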
... Here, the term "dynamic" specifies that the input is approximated as a stochastic process that varies in time, in contrast to the notion of a mean-field theory in physics, which usually describes processes embedded in a constant field. DMFT has led to significant insights into the interrelation between network structure and intrinsic timescales for recurrent networks of (nonspiking) rate neurons [15][16][17][18][19][20][21][22][23]. In particular, it has been shown that very slow intrinsic timescales emerge close to a transition to chaos in autonomous networks [15]. ...
... Interestingly, simply adding a noisy input to the network significantly reduces this effect and even leads to a novel dynamical regime [21]. Furthermore, increasing the complexity of the singleneuron dynamics reveals that timescales of slow adaptive currents are not straightforwardly expressed in the network dynamics [22], and leads to yet another dynamical regime termed "resonant chaos" [23]. In combination, these results suggest that the mechanisms shaping the intrinsic timescales in recurrent networks are highly involved. ...
Article
Full-text available
A complex interplay of single-neuron properties and the recurrent network structure shapes the activity of cortical neurons. The single-neuron activity statistics differ in general from the respective population statistics, including spectra and, correspondingly, autocorrelation times. We develop a theory for self-consistent second-order single-neuron statistics in block-structured sparse random networks of spiking neurons. In particular, the theory predicts the neuron-level autocorrelation times, also known as intrinsic timescales, of the neuronal activity. The theory is based on an extension of dynamic mean-field theory from rate networks to spiking networks, which is validated via simulations. It accounts for both static variability, e.g., due to a distributed number of incoming synapses per neuron, and temporal fluctuations of the input. We apply the theory to balanced random networks of generalized linear model neurons, balanced random networks of leaky integrate-and-fire neurons, and a biologically constrained network of leaky integrate-and-fire neurons. For the generalized linear model network with an error function nonlinearity, a novel analytical solution of the colored noise problem allows us to obtain self-consistent firing rate distributions, single-neuron power spectra, and intrinsic timescales. For the leaky integrate-and-fire networks, we derive an approximate analytical solution of the colored noise problem, based on the Stratonovich approximation of the Wiener-Rice series and a novel analytical solution for the free upcrossing statistics. Again closing the system self-consistently, in the fluctuation-driven regime, this approximation yields reliable estimates of the mean firing rate and its variance across neurons, the interspike-interval distribution, the single-neuron power spectra, and intrinsic timescales. With the help of our theory, we find parameter regimes where the intrinsic timescale significantly exceeds the membrane time constant, which indicates the influence of the recurrent dynamics. Although the resulting intrinsic timescales are on the same order for generalized linear model neurons and leaky integrate-and-fire neurons, the two systems differ fundamentally: for the former, the longer intrinsic timescale arises from an increased firing probability after a spike; for the latter, it is a consequence of a prolonged effective refractory period with a decreased firing probability. Furthermore, the intrinsic timescale attains a maximum at a critical synaptic strength for generalized linear model networks, in contrast to the minimum found for leaky integrate-and-fire networks.
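The central observable here, the intrinsic timescale, is typically estimated from the decay of the spike-count autocorrelation. The snippet below shows one common recipe on a synthetic spike train (binning, autocorrelation, exponential fit); the bin size, fit range, and the Ornstein-Uhlenbeck rate modulation used to generate the data are arbitrary illustrative choices, not values from the cited study.

# Estimate an intrinsic timescale from a spike train: bin the spikes, compute the
# spike-count autocorrelation, and fit an exponential decay to positive lags.
# Bin size, fit range, and the synthetic data below are illustrative choices only.
import numpy as np
from scipy.optimize import curve_fit

def intrinsic_timescale(spike_times, t_max, bin_size=0.02, max_lag=0.5):
    counts, _ = np.histogram(spike_times, bins=np.arange(0.0, t_max, bin_size))
    z = counts - counts.mean()
    ac = np.correlate(z, z, mode="full")[len(z) - 1:]
    ac = ac / ac[0]
    lags = np.arange(len(ac)) * bin_size
    keep = (lags > 0) & (lags <= max_lag)
    expdecay = lambda t, tau, a: a * np.exp(-t / tau)
    popt, _ = curve_fit(expdecay, lags[keep], ac[keep], p0=(0.1, 1.0))
    return popt[0]

# Synthetic test: a Poisson-like spike train whose rate is modulated by a slow
# Ornstein-Uhlenbeck process with time constant tau_true.
rng = np.random.default_rng(1)
dt, t_max, tau_true = 0.001, 200.0, 0.3
n = int(t_max / dt)
slow = np.zeros(n)
for i in range(1, n):
    slow[i] = slow[i - 1] - dt * slow[i - 1] / tau_true + np.sqrt(dt) * rng.standard_normal()
rate = np.clip(20.0 + 10.0 * slow, 0.0, None)          # firing rate in Hz
spike_times = np.nonzero(rng.random(n) < rate * dt)[0] * dt
print("estimated intrinsic timescale ~", intrinsic_timescale(spike_times, t_max), "s  (true value: 0.3 s)")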
... Multidimensional mean-field models in theoretical neuroscience are challenging to analyse [35,43,1,26] but their study is a necessary step towards understanding how multiple timescales present at the single-neuron level [34,39] affect the dynamics of large networks of neurons. ...
... The second step towards the exponential stability proof is the study of the existence and uniqueness of the stationary solutions to (1). For this step, we require: ...
Preprint
Full-text available
We study the asymptotic stability of a two-dimensional mean-field equation, which takes the form of a nonlocal transport equation and generalizes the time-elapsed neuron network model by the inclusion of a leaky memory variable. This additional variable can represent a slow fatigue mechanism, like spike frequency adaptation or short-term synaptic depression. Even though two-dimensional models are known to have emergent behaviors, like population bursts, which are not observed in standard one-dimensional models, we show that in the weak connectivity regime, two-dimensional models behave like one-dimensional models, i.e. they relax to a unique stationary state. The proof is based on an application of Harris' ergodic theorem and a perturbation argument, adapted to the case of a multidimensional equation with delays.
... Specifically, they found that longer intraregional ACW is related to higher degrees of FC between that region's ACW and all other brain regions (see also [37][38][39][40][41][42]). Transmodal regions with longer ACW display stronger FC to other regions. By contrast, unimodal regions with their shorter ACW are less connected (FC) to other regions in both non-human primates [43] and humans [12,14,18,22,25,37,[44][45][46][47][48][49] (see [50][51][52][53][54][55][56][57] for cellular- and population-based feedback mechanisms of inter-regional connectivity yielding intraregional timescales; see also [3][4][5] and [19,27] for more details on the cellular basis of INT including excitation and inhibition). ...
Article
We are continuously bombarded by external inputs of various timescales from the environment. How does the brain process this multitude of timescales? Recent resting state studies show a hierarchy of intrinsic neural timescales (INT) with a shorter duration in unimodal regions (e.g., visual cortex and auditory cortex) and a longer duration in transmodal regions (e.g., default mode network). This unimodal–transmodal hierarchy is present across acquisition modalities [electroencephalogram (EEG)/magnetoencephalogram (MEG) and fMRI] and can be found in different species and during a variety of different task states. Together, this suggests that the hierarchy of INT is central to the temporal integration (combining successive stimuli) and segregation (separating successive stimuli) of external inputs from the environment, leading to temporal segmentation and prediction in perception and cognition.
... This oscillatory region coexists with the fast excitatory-inhibitory oscillation. Other computational studies have focused on the origin of this adaptation-mediated oscillation [29][30][31], the interaction of adaptation with noise-induced state switching between up- and down-states [29,34,35], and how adaptation affects the intrinsic timescales of the network [36,37]. ...
Article
Full-text available
Electrical stimulation of neural systems is a key tool for understanding neural dynamics and ultimately for developing clinical treatments. Many applications of electrical stimulation affect large populations of neurons. However, computational models of large networks of spiking neurons are inherently hard to simulate and analyze. We evaluate a reduced mean-field model of excitatory and inhibitory adaptive exponential integrate-and-fire (AdEx) neurons which can be used to efficiently study the effects of electrical stimulation on large neural populations. The rich dynamical properties of this basic cortical model are described in detail and validated using large network simulations. Bifurcation diagrams reflecting the network’s state reveal asynchronous up- and down-states, bistable regimes, and oscillatory regions corresponding to fast excitation-inhibition and slow excitation-adaptation feedback loops. The biophysical parameters of the AdEx neuron can be coupled to an electric field with realistic field strengths which then can be propagated up to the population description. We show how on the edge of bifurcation, direct electrical inputs cause network state transitions, such as turning on and off oscillations of the population rate. Oscillatory input can frequency-entrain and phase-lock endogenous oscillations. Relatively weak electric field strengths on the order of 1 V/m are able to produce these effects, indicating that field effects are strongly amplified in the network. The effects of time-varying external stimulation are well-predicted by the mean-field model, further underpinning the utility of low-dimensional neural mass models.
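For reference, the single-neuron model underlying this population description, the adaptive exponential integrate-and-fire (AdEx) neuron, is standard and reads as follows (conventional notation; the coupling of the electric field to the parameters is specific to the cited work and not reproduced here).

% Standard AdEx single-neuron equations (conventional notation):
C \frac{dV}{dt} = -g_L (V - E_L) + g_L \Delta_T \exp\!\left(\frac{V - V_T}{\Delta_T}\right) - w + I(t),
\qquad
\tau_w \frac{dw}{dt} = a\,(V - E_L) - w,
% with the reset applied whenever V crosses the spike threshold:
V \to V_r, \qquad w \to w + b .
% Subthreshold adaptation is set by a, spike-triggered adaptation by b; external
% stimulation enters through I(t) and, in the cited work, through field-dependent parameters.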
... Our results therefore argue for a potential role of network interactions in shaping OFF responses in auditory cortex. Ultimately, future work could combine both single-cell and network mechanisms in a network model with more complex intrinsic properties of individual neurons (Beiran and Ostojic, 2019;Muscinelli et al., 2019). ...
Preprint
Full-text available
Across sensory systems, complex spatio-temporal patterns of neural activity arise following the onset (ON) and offset (OFF) of stimuli. While ON responses have been widely studied, the mechanisms generating OFF responses in cortical areas have so far not been fully elucidated. We examine here the hypothesis that OFF responses are single-cell signatures of recurrent interactions at the network level. To test this hypothesis, we performed population analyses of two-photon calcium recordings in the auditory cortex of awake mice listening to auditory stimuli, and compared linear single-cell and network models. While the single-cell model explained some prominent features of the data, it could not capture the structure across stimuli and trials. In contrast, the network model accounted for the low-dimensional organisation of population responses and their global structure across stimuli, where distinct stimuli activated mostly orthogonal dimensions in the neural state-space.
Article
Full-text available
We process and integrate multiple timescales into one meaningful whole. Recent evidence suggests that the brain displays a complex multiscale temporal organization. Different regions exhibit different timescales as described by the concept of intrinsic neural timescales (INT); however, their function and neural mechanisms remain unclear. We review recent literature on INT and propose that they are key for input processing. Specifically, they are shared across different species, i.e., input sharing. This suggests a role of INT in encoding inputs through matching the inputs' stochastics with the ongoing temporal statistics of the brain's neural activity, i.e., input encoding. Following simulation and empirical data, we point out input integration versus segregation and input sampling as key temporal mechanisms of input processing. This deeply grounds the brain within its environmental and evolutionary context. It carries major implications in understanding mental features and psychiatric disorders, as well as going beyond the brain in integrating timescales into artificial intelligence.
Article
Full-text available
Autonomous, randomly coupled, neural networks display a transition to chaos at a critical coupling strength. Here, we investigate the effect of a time-varying input on the onset of chaos and the resulting consequences for information processing. Dynamic mean-field theory yields the statistics of the activity, the maximum Lyapunov exponent, and the memory capacity of the network. We find an exact condition that determines the transition from stable to chaotic dynamics and the sequential memory capacity in closed form. The input suppresses chaos by a dynamic mechanism, shifting the transition to significantly larger coupling strengths than predicted by local stability analysis. Beyond linear stability, a regime of coexistent locally expansive but nonchaotic dynamics emerges that optimizes the capacity of the network to store sequential input.
Article
Full-text available
The spiking activity of single neurons can be well described by a nonlinear integrate-and-fire model that includes somatic adaptation. When exposed to fluctuating inputs, sparsely coupled populations of these model neurons exhibit stochastic collective dynamics that can be effectively characterized using the Fokker-Planck equation. [...] Here we derive from that description four simple models for the spike rate dynamics in terms of low-dimensional ordinary differential equations using two different reduction techniques: one uses the spectral decomposition of the Fokker-Planck operator, the other is based on a cascade of two linear filters and a nonlinearity, which are determined from the Fokker-Planck equation and semi-analytically approximated. We evaluate the reduced models for a wide range of biologically plausible input statistics and find that both approximation approaches lead to spike rate models that accurately reproduce the spiking behavior of the underlying adaptive integrate-and-fire population. [...] The low-dimensional models also reproduce well the stable oscillatory spike rate dynamics that are generated either by recurrent synaptic excitation and neuronal adaptation or through delayed inhibitory synaptic feedback. [...] Therefore we have made available implementations that allow one to numerically integrate the low-dimensional spike rate models as well as the Fokker-Planck partial differential equation in efficient ways for arbitrary model parametrizations as open source software. The derived spike rate descriptions retain a direct link to the properties of single neurons, allow for convenient mathematical analyses of network states, and are well suited for application in neural mass/mean-field based brain network models.
Article
Full-text available
Populations of neurons display an extraordinary diversity in the types of problems they solve and behaviors they exhibit. Examples range from generating the complicated motor outputs involved in grasping motions to storing and recalling a specific song for songbird mating. While it is still unknown how populations of neurons can learn to solve such a diverse set of problems, techniques have recently emerged that allow us to determine how to couple neurons to form networks that solve tasks of similar complexity. The most versatile of these approaches are referred to as reservoir-computing-based techniques. Examples include the FORCE method, a novel technique that harnesses the chaos present in a large, nonlinear system to learn arbitrary dynamics. Unfortunately, little work has been done in directly applying FORCE training to spiking neural networks. Here, we demonstrate the direct applicability of the FORCE method to spiking neurons by training networks to mimic various dynamical systems. As populations of neurons can display much more interesting behaviors than reproducing simple dynamical systems, we trained spiking neural networks to also reproduce sophisticated tasks such as input classification and storing a precise sequence that corresponds to the notes of a song. For all the networks trained, firing rates and spiking statistics were constrained to be within biologically plausible regimes.
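The core of the FORCE method is a recursive-least-squares (RLS) update of readout weights whose output is fed back into the network. The sketch below shows that update on a small rate reservoir; the cited work applies FORCE to spiking neurons, which is not reproduced here, and all parameter values are illustrative.

# FORCE-style learning with recursive least squares (RLS) on a small rate reservoir.
# The cited work extends this to spiking neurons; the rate version and all parameter
# values here are an illustrative simplification. Target: a 1 Hz sine wave.
import numpy as np

rng = np.random.default_rng(2)
N, g, dt, tau, alpha = 400, 1.5, 0.01, 0.1, 1.0
J = g * rng.standard_normal((N, N)) / np.sqrt(N)   # random recurrent weights
w_fb = rng.uniform(-1.0, 1.0, N)                   # feedback weights
w_out = np.zeros(N)                                # readout weights, learned online
P = np.eye(N) / alpha                              # running inverse correlation matrix

x = 0.5 * rng.standard_normal(N)
T_train = 20.0
for step in range(int(T_train / dt)):
    t = step * dt
    r = np.tanh(x)
    z = w_out @ r                                  # network output
    x += dt / tau * (-x + J @ r + w_fb * z)        # reservoir dynamics with output feedback
    if step % 2 == 0:                              # RLS update every other step
        Pr = P @ r
        k = Pr / (1.0 + r @ Pr)
        P -= np.outer(k, Pr)
        w_out -= (z - np.sin(2 * np.pi * t)) * k   # reduce the instantaneous readout error

# Test: run autonomously and measure the error against the target.
sq_err = []
for step in range(int(5.0 / dt)):
    t = T_train + step * dt
    r = np.tanh(x)
    z = w_out @ r
    x += dt / tau * (-x + J @ r + w_fb * z)
    sq_err.append((z - np.sin(2 * np.pi * t)) ** 2)
print("post-training mean squared error ~", float(np.mean(sq_err)))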
Article
Networks of spiking neurons (SNNs) are frequently studied as models for networks of neurons in the brain, but also as a paradigm for novel energy-efficient computing hardware. In principle they are especially suitable for computations in the temporal domain, such as speech processing, because their computations are carried out via events in time and space. But so far they have been lacking the capability to preserve information for longer time spans during a computation, until it is updated or needed - like a register of a digital computer. This function is provided to artificial neural networks through Long Short-Term Memory (LSTM) units. We show here that SNNs attain similar capabilities if one includes adapting neurons in the network. Adaptation denotes an increase of the firing threshold of a neuron after preceding firing. A substantial fraction of neurons in the neocortex of rodents and humans has been found to be adapting. It turns out that if adapting neurons are integrated in a suitable manner into the architecture of SNNs, the performance of these enhanced SNNs, which we call LSNNs, for computation in the temporal domain approaches that of artificial neural networks with LSTM units. In addition, the computing and learning capabilities of LSNNs can be substantially enhanced through learning-to-learn (L2L) methods from machine learning, which have so far been applied primarily to LSTM networks and apparently never to SNNs. This preliminary report on arXiv will be replaced by a more detailed version in about a month.
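The adaptation mechanism described here, an activity-dependent increase of the firing threshold, is usually written in a discrete-time form such as the one below; the notation is illustrative, and the cited work should be consulted for the exact parametrization used in LSNNs.

% Adaptive firing threshold of neuron j (illustrative notation):
A_j(t) = b_j^{0} + \beta\, a_j(t), \qquad
a_j(t+1) = \rho\, a_j(t) + (1-\rho)\, z_j(t), \qquad
\rho = e^{-\delta t / \tau_a},
% where z_j(t) \in \{0,1\} marks a spike, b_j^0 is the baseline threshold, and \beta > 0
% is the adaptation strength. Every spike transiently raises the effective threshold A_j,
% which relaxes back with the adaptation time constant \tau_a; this slow variable is
% what gives the adapting neurons their longer memory span.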
Book
Information flow as nerve impulses in neuronal circuits is regulated at synapses. The synapse is therefore a key element for information processing in the brain. Much attention has been given to fast synaptic transmission, which predominantly regulates impulse-to-impulse transmission. Slow synaptic transmission and modulation, however, sometimes have been neglected in considering and attempting to understand brain function. Slow synaptic potentials and modulation occur with a considerable delay in response to the accumulation of synaptic and modulatory inputs. In these contexts, they are plastic in nature and play important roles in information processing in the brain. A symposium titled "Slow Synaptic Responses and Modulation" was held as the satellite symposium to the 75th Annual Meeting of the Physiological Society of Japan on March 30-31, 1998, in Kanazawa. The theme was selected not only for the reason mentioned above, but also because of the considerable involvement of many Japanese scholars in establishing the basic issues. Following the dawn of synaptic physiological research, as Sir John Eccles, Sir Bernard Katz, and Professor Stephen Kuffler carried out pioneer work, Professor Kyozou Koketsu and Professor Benjamin Libet, the students of Sir John Eccles, and their colleagues established the concept of slow synaptic responses and modulation by studying vertebrate sympathetic ganglia. Since then, the concept has been expanded with detailed investigations of both peripheral and central synapses at the levels of single ion channels, intracellular Ca2+ dynamics, intracellular transduction mechanisms, and genes.
Article
Large-scale recordings of neural activity in behaving animals have established that the transformation of sensory stimuli into motor outputs relies on low-dimensional dynamics at the population level, while individual neurons generally exhibit complex, mixed selectivity. Understanding how low-dimensional computations on mixed, distributed representations emerge from the structure of the recurrent connectivity and inputs to cortical networks is a major challenge. Classical models of recurrent networks fall into two extremes: on the one hand, balanced networks are based on fully random connectivity and generate high-dimensional spontaneous activity; on the other hand, strongly structured, clustered networks lead to low-dimensional dynamics and ad hoc computations but rely on pure selectivity. A number of functional approaches for training recurrent networks, however, suggest that a specific type of minimal connectivity structure is sufficient to implement a large range of computations. Starting from this observation, here we study a new class of recurrent network models in which the connectivity consists of a combination of a random part and a minimal, low-dimensional structure. We show that in such low-rank recurrent networks, the dynamics are low-dimensional and can be directly inferred from connectivity using a geometrical approach. We exploit this understanding to determine minimal connectivity structures required to implement specific computations. We find that the dynamical range and computational capacity of a network quickly increase with the dimensionality of the structure in the connectivity, so that a rank-two structure is already sufficient to implement a complex behavioral task such as context-dependent decision-making.
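A minimal instance of this class of models combines a random matrix with a rank-one term; the sketch below builds such a network and tracks the overlap of the activity with the structured direction. The specific parameter values and overlap statistics are illustrative, not taken from the cited work.

# Low-rank connectivity sketch: a random part plus a rank-one structure,
# J = g*chi + m n^T / N with chi_ij ~ N(0, 1/N). Parameters are illustrative only.
import numpy as np

rng = np.random.default_rng(3)
N, g, dt, T = 800, 0.8, 0.05, 100.0
chi = rng.standard_normal((N, N)) / np.sqrt(N)
m = rng.standard_normal(N)
n = 2.5 * m + rng.standard_normal(N)      # structure vectors with overlap n.m/N ~ 2.5
J = g * chi + np.outer(m, n) / N

x = 0.1 * rng.standard_normal(N)
kappa = []
for _ in range(int(T / dt)):
    r = np.tanh(x)
    x += dt * (-x + J @ r)
    kappa.append(n @ r / N)               # overlap of the activity with the structured direction

# With the rank-one term present, the overlap settles at a nonzero value (its sign
# depends on the initial condition) and the dynamics concentrate along m; with a
# purely random J the overlap instead fluctuates around zero.
print("overlap kappa averaged over the second half of the run:",
      float(np.mean(kappa[len(kappa) // 2:])))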
Article
The brain must both react quickly to new inputs and store a memory of past activity. This requires biology that operates over a vast range of time scales. Fast time scales are determined by the kinetics of synaptic conductances and ionic channels; however, the mechanics of slow time scales are more complicated. In this opinion article we review two distinct network-based mechanisms that impart slow time scales in recurrently coupled neuronal networks. The first is in strongly coupled networks where the time scale of the internally generated fluctuations diverges at the transition between stable and chaotic firing rate activity. The second is in networks with finitely many members where noise-induced transitions between metastable states appear as a slow time scale in the ongoing network firing activity. We discuss these mechanisms with an emphasis on their similarities and differences.
Article
Networks of randomly connected neurons are among the most popular models in theoretical neuroscience. The connectivity between neurons in the cortex is, however, not fully random: the simplest and most prominent deviation from randomness found in experimental data is the overrepresentation of bidirectional connections among pyramidal cells. Using numerical and analytical methods, we investigated the effects of partially symmetric connectivity on dynamics in networks of rate units. We considered the two dynamical regimes exhibited by random neural networks: the weak-coupling regime, where the firing activity decays to a single fixed point unless the network is stimulated, and the strong-coupling or chaotic regime, characterized by internally generated fluctuating firing rates. In the weak-coupling regime, we computed analytically, for an arbitrary degree of symmetry, the autocorrelation of network activity in the presence of external noise. In the chaotic regime, we performed simulations to determine the timescale of the intrinsic fluctuations. In both cases, symmetry increases the characteristic asymptotic decay time of the autocorrelation function and therefore slows down the dynamics in the network.
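A minimal way to reproduce this setup is to interpolate between a Gaussian random matrix and its transpose, as in the sketch below. This is one of several equivalent constructions; the cited work parametrizes the degree of symmetry directly as the correlation between reciprocal couplings, whereas eta here is only a mixing parameter.

# Partially symmetric random connectivity: mix a Gaussian matrix with its transpose.
# With this construction Var(J_ij) = g^2/N and corr(J_ij, J_ji) = 2*eta/(1+eta^2);
# eta = 0 gives a fully asymmetric matrix, eta = 1 a fully symmetric one.
import numpy as np

def partially_symmetric(N, g, eta, rng):
    A = rng.standard_normal((N, N))
    return g * (A + eta * A.T) / np.sqrt(N * (1.0 + eta**2))

rng = np.random.default_rng(4)
N, g = 1000, 1.0
for eta in (0.0, 0.5, 1.0):
    J = partially_symmetric(N, g, eta, rng)
    iu = np.triu_indices(N, k=1)
    corr = np.corrcoef(J[iu], J.T[iu])[0, 1]
    print(f"eta = {eta:.1f}: empirical corr(J_ij, J_ji) = {corr:+.2f}"
          f"  (predicted {2 * eta / (1 + eta**2):+.2f})")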
Article
Recurrent neural networks (RNNs) are a class of computational models that are often used as a tool to explain neurobiological phenomena, considering anatomical, electrophysiological and computational constraints. RNNs can either be designed to implement a certain dynamical principle, or they can be trained by input–output examples. Recently, there has been substantial progress in utilizing trained RNNs both for computational tasks and as explanations of neural phenomena. I will review how combining trained RNNs with reverse engineering can provide an alternative framework for modeling in neuroscience, potentially serving as a powerful hypothesis generation tool. Despite the recent progress and potential benefits, there are many fundamental gaps towards a theory of these networks. I will discuss these challenges and possible methods to attack them.