Available via license: CC BY

Content may be subject to copyright.


Neural activity in awake behaving animals exhibits a vast range of timescales that can be severalfold larger than the membrane time constant of individual neurons. Two types of mechanisms have been proposed to explain this conundrum. One possibility is that large timescales are generated by a network mechanism based on positive feedback, but this hypothesis requires fine-tuning of the strength or structure of the synaptic connections. A second possibility is that large timescales in the neural dynamics are inherited from large timescales of underlying biophysical processes, two prominent candidates being intrinsic adaptive ionic currents and synaptic transmission. How the timescales of adaptation or synaptic transmission influence the timescale of the network dynamics has, however, not been fully explored. To address this question, here we analyze large networks of randomly connected excitatory and inhibitory units with additional degrees of freedom that correspond to adaptation or synaptic filtering. We determine the fixed points of the systems, their stability to perturbations and the corresponding dynamical timescales. Furthermore, we apply dynamical mean field theory to study the temporal statistics of the activity in the fluctuating regime, and examine how the adaptation and synaptic timescales transfer from individual units to the whole population. Our overarching finding is that synaptic filtering and adaptation in single neurons have very different effects at the network level. Unexpectedly, the macroscopic network dynamics do not inherit the large timescale present in adaptive currents. In contrast, the timescales of network activity increase proportionally to the time constant of the synaptic filter.
Altogether, our study demonstrates that the timescales of different biophysical processes have different effects on the network level, so that the slow processes within individual neurons do not necessarily induce slow activity in large recurrent neural networks.
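The paper's central contrast can be illustrated with a minimal numerical sketch (our own toy reconstruction with assumed parameters, not the authors' code): a random tanh rate network in which the recurrent input passes through a synaptic filter with time constant tau_s. The autocorrelation decay time of the fluctuating activity should grow with tau_s.

```python
import numpy as np

def simulate(tau_s, g=2.0, N=200, T=4000, dt=0.1, seed=0):
    """Euler-integrate dx/dt = -x + s, tau_s ds/dt = -s + J tanh(x).

    The membrane time constant is 1; J is a Gaussian random coupling
    matrix with gain g > 1, so the network is in the fluctuating regime.
    """
    rng = np.random.default_rng(seed)
    J = g * rng.standard_normal((N, N)) / np.sqrt(N)
    x = rng.standard_normal(N)
    s = np.zeros(N)
    xs = np.empty((T, N))
    for t in range(T):
        r = np.tanh(x)
        x += dt * (-x + s)
        s += dt / tau_s * (-s + J @ r)
        xs[t] = x
    return xs[T // 2:]                      # discard the transient

def decay_time(xs, dt=0.1):
    """Lag at which the population-averaged rate autocorrelation drops below 1/e."""
    r = np.tanh(xs)
    r -= r.mean(0)
    n = len(r)
    f = np.fft.rfft(r, 2 * n, axis=0)
    ac = np.fft.irfft(f * np.conj(f), axis=0)[:n].sum(1)
    ac /= ac[0]
    return dt * np.argmax(ac < 1.0 / np.e)

tau_fast = decay_time(simulate(tau_s=1.0))
tau_slow = decay_time(simulate(tau_s=10.0))
```

With these illustrative parameters the decay time for tau_s = 10 comes out substantially larger than for tau_s = 1, consistent with the claim that network timescales track the synaptic filter.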


... One possibility is that multiple timescales reflect biophysical properties of individual neurons within a local population. For example, two timescales can arise from mixing heterogeneous timescales of different neurons [44,45] or combining different biophysical processes, such as a fast membrane time constant and a slow synaptic time constant [46]. Alternatively, multiple timescales in local population activity can arise from spatiotemporal population dynamics in networks with spatially arranged connectivity [47]. ...

... The second model assumes that two timescales arise from two local biophysical processes, e.g., a fast membrane time constant and a slow synaptic time constant (Fig. 5b) [46]. We modeled the membrane time constant with the fast self-excitation timescale, and the synaptic time constant as a low-pass filter of the input to each unit with a slow time constant τ_synapse (Methods) [46]. The connectivity between units is random. ...

Intrinsic timescales characterize dynamics of endogenous fluctuations in neural activity. Variation of intrinsic timescales across the neocortex reflects functional specialization of cortical areas, but less is known about how intrinsic timescales change during cognitive tasks. We measured intrinsic timescales of local spiking activity within columns of area V4 in male monkeys performing spatial attention tasks. The ongoing spiking activity unfolded across at least two distinct timescales, fast and slow. The slow timescale increased when monkeys attended to the receptive field's location and correlated with reaction times. By evaluating predictions of several network models, we found that spatiotemporal correlations in V4 activity were best explained by the model in which multiple timescales arise from recurrent interactions shaped by spatially arranged connectivity, and attentional modulation of timescales results from an increase in the efficacy of recurrent interactions. Our results suggest that multiple timescales may arise from the spatial connectivity in the visual cortex and flexibly change with the cognitive state due to dynamic effective interactions between neurons.
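The idea of extracting fast and slow intrinsic timescales from autocorrelations can be sketched on synthetic data (a generic illustration with assumed parameters, not the authors' analysis pipeline): a signal mixing a fast and a slow Ornstein-Uhlenbeck process has a two-exponential autocorrelation, and a log-linear fit to its tail, where the fast component has died out, recovers the slow timescale.

```python
import numpy as np

rng = np.random.default_rng(1)
dt, T = 1.0, 200_000              # sampling step and number of samples

def ou(tau):
    """Discretized Ornstein-Uhlenbeck process with unit stationary variance."""
    a = np.exp(-dt / tau)
    noise = np.sqrt(1 - a * a) * rng.standard_normal(T)
    x = np.zeros(T)
    for t in range(1, T):
        x[t] = a * x[t - 1] + noise[t]
    return x

sig = ou(5.0) + ou(60.0)          # fast + slow component, equal variance
sig -= sig.mean()

# FFT-based autocorrelation up to lag 150
f = np.fft.rfft(sig, 2 * T)
ac = np.fft.irfft(f * np.conj(f))[:151]
ac /= ac[0]

# beyond a few fast time constants only the slow exponential remains,
# so a log-linear fit to the tail recovers the slow timescale (true value 60)
lags = np.arange(30, 120)
slope = np.polyfit(lags * dt, np.log(ac[lags]), 1)[0]
tau_slow_est = -1.0 / slope
```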

... These studies relied on Wick's theorem to calculate the variance of covariances, which is, however, restricted to linear systems. Here we instead employ a more general replica approach that can be straightforwardly applied to nonlinear rate models [50], as extensively studied in the recent theoretical neuroscience literature [64,69–75]. Importantly, the replica theory reveals in a systematic manner that the variance of covariances is an observable that is O(1/N) in the network size and requires beyond-mean-field methods to be computed. ...

Understanding the coordination structure of neurons in neuronal networks is essential for unraveling the distributed information processing mechanisms in brain networks. Recent advancements in measurement techniques have resulted in an increasing amount of data on neural activities recorded in parallel, revealing largely heterogeneous correlation patterns across neurons. Yet, the mechanistic origin of this heterogeneity is largely unknown because existing theoretical approaches linking structure and dynamics in neural circuits are mostly restricted to average connection patterns. Here we present a systematic inclusion of variability in network connectivity via tools from statistical physics of disordered systems. We study networks of spiking leaky integrate-and-fire neurons and employ mean-field and linear-response methods to map the spiking networks to linear rate models with an equivalent neuron-resolved correlation structure. The latter models can be formulated in a field-theoretic language that allows using disorder-average and replica techniques to systematically derive quantitatively matching beyond-mean-field predictions for the mean and variance of cross-covariances as functions of the average and variability of connection patterns. We show that heterogeneity in covariances is not a result of variability in single-neuron firing statistics but stems from the sparse realization and variable strength of connections, as ubiquitously observed in brain networks. Average correlations between neurons are found to be insensitive to the level of heterogeneity, which in contrast modulates the variability of covariances across many orders of magnitude, giving rise to an efficient tuning of the complexity of coordination patterns in neuronal circuits.
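A toy linear-rate analogue (our sketch, with assumed dynamics dx/dt = (W - I)x + white noise rather than the spiking model of the paper) already shows the headline effect: widening the weight distribution at fixed zero mean leaves the average cross-covariance near zero while spreading the distribution of covariances. The stationary covariance solves the Lyapunov equation AC + CA^T = -I, solved here via its Kronecker-product form.

```python
import numpy as np

def cross_cov_stats(g, N=40, seed=2):
    """Mean and std of cross-covariances of dx/dt = (W - I)x + white noise.

    The stationary covariance C solves A C + C A^T = -I with A = W - I,
    written as a linear system via Kronecker products.
    """
    rng = np.random.default_rng(seed)
    W = g * rng.standard_normal((N, N)) / np.sqrt(N)   # zero-mean couplings
    A = W - np.eye(N)
    I = np.eye(N)
    lhs = np.kron(A, I) + np.kron(I, A)
    C = np.linalg.solve(lhs, -np.eye(N).flatten()).reshape(N, N)
    off = C[~np.eye(N, dtype=bool)]                    # cross-covariances
    return off.mean(), off.std()

mean_lo, std_lo = cross_cov_stats(g=0.3)   # weak coupling variability
mean_hi, std_hi = cross_cov_stats(g=0.8)   # strong coupling variability
```

The mean stays near zero in both cases, while the spread of cross-covariances grows strongly with the coupling variability g, echoing the paper's finding at the level of this much simpler model.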

... Neuronal filters allow neuronal systems to select certain information or enhance the communication of specific information components over others [1–5]. As such, neuronal filters play important roles in neuronal information processing, rhythm generation and brain computations [3,4,6–17]. Band-pass frequency filters are associated with the notion of resonance. ...

Neuronal filters can be thought of as constituent building blocks underlying the ability of neuronal systems to process information, generate rhythms and perform computations. How neuronal filters are generated by the concerted activity of a multiplicity of processes and interacting time scales within and across levels of neuronal organization is poorly understood. In this paper we address these issues in a feedforward network in the presence of synaptic short-term plasticity (STP, depression and facilitation). The network consists of a presynaptic spike-train, a postsynaptic passive cell, and an excitatory (AMPA) chemical synapse. The dynamics of each network component are controlled by one or more time scales. We use mathematical modeling, numerical simulations and analytical approximations of the network response to presynaptic spike trains. We explain the mechanisms by which the participating time scales shape the neuronal filters at (i) the synaptic update level (the target of the synaptic variable in response to presynaptic spikes), which is shaped by STP, (ii) the synaptic variable, and (iii) the postsynaptic membrane potential. We focus on two metrics giving rise to two types of profiles (curves of the corresponding metrics as a function of the spike-train input frequency or firing rate): (i) peak profiles and (ii) peak-to-trough amplitude profiles. The effects of STP are present at the synaptic update level and are communicated to the synaptic level, where they interact with the synaptic decay time. STP band-pass filters (BPFs) are reflected in the synaptic BPFs with some modifications due primarily to the synaptic decay time. The postsynaptic filters result from the interaction between the synaptic variable and the biophysical properties of the postsynaptic cell.
Postsynaptic BPFs can be inherited from the synaptic level or generated across levels of organization due to the interaction between (i) a synaptic low-pass filter and the postsynaptic summation filter (voltage peak BPF), and (ii) a synaptic high-pass filter and the postsynaptic summation filter (peak-to-trough amplitude BPF). These types of BPFs persist in response to jittered periodic spike trains and Poisson-distributed spike trains. The response variability depends on a number of factors, including the spike-train input frequency, and is controlled by STP in a non-monotonic, frequency-dependent manner. The lessons learned from the investigation of this relatively simple feedforward network will serve to construct a framework to analyze the mechanisms of generation of neuronal filters in networks with more complex architectures and a variety of interacting cellular, synaptic and plasticity time scales.
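The synaptic-update band-pass filter can be sketched with a Tsodyks-Markram-style depression-facilitation model (illustrative parameter values, not the ones used in the paper): iterating the per-spike update at each input rate yields a steady-state efficacy profile u·x that peaks at an intermediate frequency.

```python
import numpy as np

def efficacy(rate, U=0.1, tau_f=0.5, tau_d=0.2, n_spikes=500):
    """Steady-state synaptic update u*x for periodic presynaptic spikes.

    u: facilitation variable (pushed toward 1 at spikes, decays back to U);
    x: depression variable (depleted by u*x at spikes, recovers to 1).
    """
    T = 1.0 / rate
    u, x = U, 1.0
    for _ in range(n_spikes):
        u = u * np.exp(-T / tau_f) + U * (1.0 - u * np.exp(-T / tau_f))
        E = u * x                       # synaptic update caused by this spike
        x = x * (1.0 - u) * np.exp(-T / tau_d) + 1.0 - np.exp(-T / tau_d)
    return E

rates = np.arange(1, 81)                # presynaptic rates (Hz)
profile = np.array([efficacy(r) for r in rates])
best_rate = rates[np.argmax(profile)]
```

With facilitation slower than depression the profile is band-pass: efficacy at the preferred rate exceeds both the low- and high-frequency limits, which is the STP-level BPF that the synaptic and postsynaptic levels then reshape.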

... As an alternative to spiking models, a number of rate-based models have also been developed, including those that incorporate forms of after-spike currents. Muscinelli et al. (2019) and Beiran and Ostojic (2019) model after-spike currents in a form similar to equation 1.3, where I_j represents the after-spike current and s represents the firing rate. This form enables a neuron's firing behavior to have an additive effect on the after-spike current but not a multiplicative effect. ...

Individual neurons in the brain have complex intrinsic dynamics that are highly diverse. We hypothesize that the complex dynamics produced by networks of complex and heterogeneous neurons may contribute to the brain's ability to process and respond to temporally complex data. To study the role of complex and heterogeneous neuronal dynamics in network computation, we develop a rate-based neuronal model, the generalized-leaky-integrate-and-fire-rate (GLIFR) model, which is a rate equivalent of the generalized-leaky-integrate-and-fire model. The GLIFR model has multiple dynamical mechanisms, which add to the complexity of its activity while maintaining differentiability. We focus on the role of after-spike currents, currents induced or modulated by neuronal spikes, in producing rich temporal dynamics. We use machine learning techniques to learn both synaptic weights and parameters underlying intrinsic dynamics to solve temporal tasks. The GLIFR model allows the use of standard gradient descent techniques rather than surrogate gradient descent, which has been used in spiking neural networks. After establishing the ability to optimize parameters using gradient descent in single neurons, we ask how networks of GLIFR neurons learn and perform on temporally challenging tasks, such as sequential MNIST. We find that these networks learn diverse parameters, which gives rise to diversity in neuronal dynamics, as demonstrated by clustering of neuronal parameters. GLIFR networks have mixed performance when compared to vanilla recurrent neural networks, with higher performance in pixel-by-pixel MNIST but lower in line-by-line MNIST. However, they appear to be more robust to random silencing. We find that the ability to learn heterogeneity and the presence of after-spike currents contribute to these gains in performance.
Our work demonstrates both the computational robustness of neuronal complexity and diversity in networks and a feasible method of training such models using exact gradients.
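The additive after-spike-current idea can be sketched with a single rate unit (hypothetical parameters; a simplified stand-in, not the GLIFR implementation): a slow current driven by the unit's own rate with a negative gain produces spike-frequency-adaptation-like decay of the response to a step input.

```python
import numpy as np

def step_response(a=-2.0, tau_m=0.02, tau_a=0.3, I_ext=2.0, dt=1e-3, T=1000):
    """Rate unit with an additive after-spike current I_asc driven by the
    unit's own rate s, as in tau_a dI/dt = -I + a*s with a < 0."""
    V, I_asc = 0.0, 0.0
    s_trace = np.empty(T)
    for t in range(T):
        s = 1.0 / (1.0 + np.exp(-4.0 * (V - 0.5)))   # smooth rate nonlinearity
        V += dt / tau_m * (-V + I_ext + I_asc)
        I_asc += dt / tau_a * (-I_asc + a * s)
        s_trace[t] = s
    return s_trace

s_trace = step_response()
```

The rate peaks shortly after stimulus onset (fast membrane, no accumulated after-spike current yet) and then relaxes to a lower steady state as the slow negative current builds up.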


... Whereas Hebbian plasticity attracts the neuronal state to its history, anti-Hebbian plasticity repels the neuronal state away from its history. This quickens chaos, tightening C_φ(τ), and generates an oscillatory component in neuronal activity, creating oscillations in C_φ(τ) during its decay to zero (Fig. 3A; see [41,42] for another example of this effect). While finite-size simulations of the non-plastic system of [33] can exhibit limit cycles, our calculation of C_φ(τ) in the limit N → ∞ reveals that this plasticity-driven oscillatory component is not merely a finite-size effect. ...

In neural circuits, synapses influence neurons by shaping network dynamics, and neurons influence synapses through activity-dependent plasticity. Motivated by this fact, we study a network model in which neurons and synapses are mutually coupled dynamic variables. Model neurons obey dynamics shaped by synaptic couplings that fluctuate, in turn, about quenched random strengths in response to pre- and postsynaptic neuronal activity. Using dynamical mean-field theory, we compute the phase diagram of the combined neuronal-synaptic system, revealing several novel phases suggestive of computational function. In the regime in which the non-plastic system is chaotic, Hebbian plasticity slows chaos, while anti-Hebbian plasticity quickens chaos and generates an oscillatory component in neuronal activity. Deriving the spectrum of the joint neuronal-synaptic Jacobian reveals that these behaviors manifest as differential effects of eigenvalue repulsion. In the regime in which the non-plastic system is quiescent, Hebbian plasticity can induce chaos. In both regimes, sufficiently strong Hebbian plasticity creates exponentially many stable neuronal-synaptic fixed points that coexist with chaotic states. Finally, in chaotic states with sufficiently strong Hebbian plasticity, halting synaptic dynamics leaves a stable fixed point of neuronal dynamics, freezing the neuronal state. This phase of freezable chaos provides a novel mechanism of synaptic working memory in which a stable fixed point of neuronal dynamics is continuously destabilized through synaptic dynamics, allowing any neuronal state to be stored as a stable fixed point by halting synaptic plasticity.
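The model structure can be sketched as two mutually coupled dynamical systems (a minimal toy version with assumed parameters, not the authors' full analysis): rates evolve under the current couplings, while the couplings fluctuate around quenched random strengths under a Hebbian drive.

```python
import numpy as np

rng = np.random.default_rng(3)
N, g, gamma, tau_J, dt, T = 100, 1.5, 1.0, 10.0, 0.1, 2000
J0 = g * rng.standard_normal((N, N)) / np.sqrt(N)   # quenched random strengths
J = J0.copy()
x = rng.standard_normal(N)
for t in range(T):
    phi = np.tanh(x)
    x += dt * (-x + J @ phi)
    # Hebbian fluctuation of the couplings around the quenched baseline J0
    J += dt / tau_J * (-(J - J0) + (gamma / np.sqrt(N)) * np.outer(phi, phi))
# relative drift of the couplings away from the quenched baseline
drift = np.linalg.norm(J - J0) / np.linalg.norm(J0)
```

The synaptic matrix settles into a state displaced from J0 by an activity-dependent, self-consistent amount; making gamma larger strengthens this displacement, which is the knob behind the phases described in the abstract.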

... When the mean of excitatory and inhibitory weights approximately cancel each other, the corresponding eigenvalue is small and resides within the bulk. A number of works have examined the bulk of the eigenvalue spectrum for random matrices [11,18,44,53,54,56,66,67], and showed that the obtained eigenvalue statistics have important implications for network dynamics such as spontaneous fluctuations [68], oscillations [69,70] and correlations in asynchronous irregular activity [71]. In contrast, in this work, we focus on the parameter regime where the discrete eigenvalues are outliers and well separated from the eigenvalue bulk. ...

How the connectivity of cortical networks determines the neural dynamics and the resulting computations is one of the key questions in neuroscience. Previous works have pursued two complementary approaches to quantify the structure in connectivity. One approach starts from the perspective of biological experiments, where only the local statistics of connectivity motifs between small groups of neurons are accessible. Another approach is based instead on the perspective of artificial neural networks, where the global connectivity matrix is known, and in particular its low-rank structure can be used to determine the resulting low-dimensional dynamics. A direct relationship between these two approaches is, however, currently missing, and in particular it remains to be clarified how local connectivity statistics and the global low-rank connectivity structure are interrelated and shape the low-dimensional activity. To bridge this gap, here we develop a method for mapping local connectivity statistics onto an approximate global low-rank structure. Our method rests on approximating the global connectivity matrix using dominant eigenvectors, which we compute using perturbation theory for random matrices. We demonstrate that multi-population networks defined from local connectivity statistics for which the central limit theorem holds can be approximated by low-rank connectivity with Gaussian-mixture statistics. We specifically apply this method to excitatory-inhibitory networks with reciprocal motifs, and show that it yields reliable predictions for both the low-dimensional dynamics and the statistics of population activity. Importantly, it analytically accounts for the activity heterogeneity of individual neurons in specific realizations of local connectivity.
Altogether, our approach allows us to disentangle the effects of mean connectivity and reciprocal motifs on the global recurrent feedback, and provides an intuitive picture of how local connectivity shapes global network dynamics.
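The mapping from mean connectivity statistics to a low-rank outlier can be sketched in the simplest case (a hand-picked toy example: rank-one column-mean structure plus an i.i.d. random part, with assumed parameter values): perturbation theory predicts an outlier eigenvalue close to the eigenvalue of the mean matrix, well separated from the random bulk of radius g.

```python
import numpy as np

rng = np.random.default_rng(4)
N, f = 400, 0.8                       # network size, fraction excitatory
mE, mI, g = 4.0, 6.0, 0.5             # column means and random-part gain
v = np.where(np.arange(N) < f * N, mE, -mI) / N
M = np.outer(np.ones(N), v)           # rank-one mean connectivity
J = M + g * rng.standard_normal((N, N)) / np.sqrt(N)

lam_outlier = np.linalg.eigvals(J).real.max()
lam_mean = N * v.mean()               # eigenvalue of M: f*mE - (1-f)*mI = 2.0
```

The outlier sits near lam_mean (random-matrix perturbation theory gives a small correction of order g^2 / lam_mean), far outside the eigenvalue bulk of radius g = 0.5, so the rank-one approximation captures the dominant recurrent feedback.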

... If the input is increased and STD is decreased, either directly or indirectly by increasing SFA, the resulting PSDs show a "tilt" (Figure 7b). Generally, filters that act directly upon the membrane current, like STD, more strongly affect resulting timescales than filters that act upon hidden variables, like SFA (37). ...

The relationship between macroscale electrophysiological recordings and the dynamics of underlying neural activity remains unclear. We have previously shown that low frequency EEG activity (<1 Hz) is decreased at the seizure onset zone (SOZ), while higher frequency activity (1–50 Hz) is increased. These changes result in power spectral densities (PSDs) with flattened slopes near the SOZ, which are assumed to be areas of increased excitability. We wanted to understand possible mechanisms underlying PSD changes in brain regions of increased excitability. We hypothesized that these observations are consistent with changes in adaptation within the neural circuit.
We developed a theoretical framework and tested the effect of adaptation mechanisms, such as spike frequency adaptation and synaptic depression, on excitability and PSDs using filter-based neural mass models and conductance-based models. We compared the contribution of single timescale adaptation and multiple timescale adaptation.
We found that adaptation with multiple timescales alters the PSDs. Multiple timescales of adaptation can approximate fractional dynamics, a form of calculus related to power laws, history dependence, and non-integer order derivatives. Coupled with input changes, these dynamics changed circuit responses in unexpected ways. Increased input without synaptic depression increases broadband power. However, increased input with synaptic depression may decrease power. The effects of adaptation were most pronounced for low frequency activity (<1 Hz). Increased input combined with a loss of adaptation yielded reduced low frequency activity and increased higher frequency activity, consistent with clinical EEG observations from SOZs.
Spike frequency adaptation and synaptic depression, two forms of multiple timescale adaptation, affect low frequency EEG and the slope of PSDs. These neural mechanisms may underlie changes in EEG activity near the SOZ and relate to neural hyperexcitability. Neural adaptation may be evident in macroscale electrophysiological recordings and provide a window to understanding neural circuit excitability.
Author Summary
Electrophysiological recordings such as EEG from the human brain often come from many thousands of neurons or more. It can be difficult to relate recorded activity to characteristics of the underlying neurons and neural circuits. Here, we use a novel theoretical framework and computational neural models to understand how neural adaptation might be evident in human EEG recordings. Neural adaptation includes mechanisms such as spike frequency adaptation and short-term depression that emphasize stimulus changes and help promote stability. Our results suggest that changes in neural adaptation affect EEG signals, especially at low frequencies. Further, adaptation can lead to changes related to fractional derivatives, a kind of calculus with non-integer orders. Neural adaptation may provide a window into understanding specific aspects of neuron excitability even from EEG recordings.
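The qualitative PSD effect can be sketched with a linear transfer-function toy model (our illustration with assumed gains and timescales, not the paper's neural mass model): adaptation channels with multiple timescales act as stacked high-pass filters that suppress low-frequency power while leaving high frequencies nearly untouched, which flattens the PSD slope when adaptation is lost.

```python
import numpy as np

f = np.logspace(-1, 2, 200)                  # frequency axis (Hz)
w = 2 * np.pi * f
tau_m = 0.01                                 # fast membrane time constant (s)
taus = np.array([0.05, 0.25, 1.0, 4.0])      # multiple adaptation timescales (s)

def psd(adapt_gain):
    """|H(w)|^2 of a linear unit whose adaptation currents are low-pass
    filtered copies of its activity, each fed back with gain adapt_gain."""
    adapt = sum(adapt_gain / (1 + 1j * w * tk) for tk in taus)
    H = 1.0 / (1j * w * tau_m + 1.0 + adapt)
    return np.abs(H) ** 2

psd_adapt = psd(1.0)     # with four adaptation channels
psd_none = psd(0.0)      # adaptation removed
low_ratio = psd_adapt[0] / psd_none[0]       # strong low-frequency suppression
high_ratio = psd_adapt[-1] / psd_none[-1]    # high frequencies barely change
```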

Neural networks are high-dimensional nonlinear dynamical systems that process information through the coordinated activity of many connected units. Understanding how biological and machine-learning networks function and learn requires knowledge of the structure of this coordinated activity, information contained, for example, in cross covariances between units. Self-consistent dynamical mean field theory (DMFT) has elucidated several features of random neural networks—in particular, that they can generate chaotic activity—however, a calculation of cross covariances using this approach has not been provided. Here, we calculate cross covariances self-consistently via a two-site cavity DMFT. We use this theory to probe spatiotemporal features of activity coordination in a classic random-network model with independent and identically distributed (i.i.d.) couplings, showing an extensive but fractionally low effective dimension of activity and a long population-level timescale. Our formulas apply to a wide range of single-unit dynamics and generalize to non-i.i.d. couplings. As an example of the latter, we analyze the case of partially symmetric couplings.
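The "extensive but fractionally low" effective dimension can be probed directly in simulation (a sketch of the standard participation-ratio estimate, with our own parameter choices, not the paper's cavity calculation):

```python
import numpy as np

rng = np.random.default_rng(5)
N, g, dt, T = 200, 1.8, 0.1, 6000
J = g * rng.standard_normal((N, N)) / np.sqrt(N)
x = rng.standard_normal(N)
xs = np.empty((T, N))
for t in range(T):
    x += dt * (-x + J @ np.tanh(x))    # classic random rate network
    xs[t] = x
R = np.tanh(xs[T // 2:])               # rates after discarding the transient
C = np.cov(R.T)                        # equal-time cross-covariance matrix
lam = np.linalg.eigvalsh(C)
pr = lam.sum() ** 2 / (lam ** 2).sum() # participation-ratio dimension
frac = pr / N                          # fraction of the N available dimensions
```

The participation ratio grows with N (extensive) but occupies only a fraction of the available dimensions, in line with the cavity-DMFT picture.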

Although the generation of movements is a fundamental function of the nervous system, the underlying neural principles remain unclear. As flexor and extensor muscle activities alternate during rhythmic movements such as walking, it is often assumed that the responsible neural circuitry is similarly exhibiting alternating activity¹. Here we present ensemble recordings of neurons in the lumbar spinal cord that indicate that, rather than alternating, the population is performing a low-dimensional ‘rotation’ in neural space, in which the neural activity is cycling through all phases continuously during the rhythmic behaviour. The radius of rotation correlates with the intended muscle force, and a perturbation of the low-dimensional trajectory can modify the motor behaviour. As existing models of spinal motor control do not offer an adequate explanation of rotation¹,², we propose a theory of neural generation of movements from which this and other unresolved issues, such as speed regulation, force control and multifunctionalism, are readily explained.

While most models of randomly connected neural networks assume single-neuron models with simple dynamics, neurons in the brain exhibit complex intrinsic dynamics over multiple timescales. We analyze how the dynamical properties of single neurons and recurrent connections interact to shape the effective dynamics in large randomly connected networks. A novel dynamical mean-field theory for strongly connected networks of multi-dimensional rate neurons shows that the power spectrum of the network activity in the chaotic phase emerges from a nonlinear sharpening of the frequency response function of single neurons. For the case of two-dimensional rate neurons with strong adaptation, we find that the network exhibits a state of “resonant chaos”, characterized by robust, narrow-band stochastic oscillations. The coherence of stochastic oscillations is maximal at the onset of chaos and their correlation time scales with the adaptation timescale of single units. Surprisingly, the resonance frequency can be predicted from the properties of isolated neurons, even in the presence of heterogeneity in the adaptation parameters. In the presence of these internally-generated chaotic fluctuations, the transmission of weak, low-frequency signals is strongly enhanced by adaptation, whereas signal transmission is not influenced by adaptation in the non-chaotic regime. Our theoretical framework can be applied to other mechanisms at the level of single neurons, such as synaptic filtering, refractoriness or spike synchronization. These results advance our understanding of the interaction between the dynamics of single units and recurrent connectivity, which is a fundamental step toward the description of biologically realistic neural networks.
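The resonant-chaos picture can be sketched with a toy network of two-dimensional adaptive units (assumed parameters, not the paper's): each unit carries an adaptation variable that makes the isolated unit a damped oscillator, and the chaotic network's power spectrum develops a peak near the single-unit resonance frequency.

```python
import numpy as np

rng = np.random.default_rng(6)
N, g, b, tau_a, dt, T = 200, 3.0, 2.0, 5.0, 0.1, 16000
J = g * rng.standard_normal((N, N)) / np.sqrt(N)
x = rng.standard_normal(N)
a = np.zeros(N)
xs = np.empty((T, N))
for t in range(T):
    phi = np.tanh(x)
    x += dt * (-x - b * a + J @ phi)     # rate with adaptation feedback
    a += dt / tau_a * (-a + x)           # slow adaptation variable
    xs[t] = x
X = xs[T // 2:] - xs[T // 2:].mean(0)    # discard transient, remove DC
spec = (np.abs(np.fft.rfft(X, axis=0)) ** 2).mean(1)
freqs = np.fft.rfftfreq(X.shape[0], d=dt)
f_peak = freqs[1:][np.argmax(spec[1:])]  # spectral peak (excluding DC bin)

# resonance frequency of a single isolated 2D unit (linearized)
f_res = np.sqrt((1 + b) / tau_a - ((1 + 1 / tau_a) / 2) ** 2) / (2 * np.pi)
```

In this sketch the spectral peak of the fluctuating network activity lies in the vicinity of f_res, illustrating the abstract's point that the resonance frequency is predictable from isolated-unit properties.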

Autonomous, randomly coupled, neural networks display a transition to chaos at a critical coupling strength. Here, we investigate the effect of a time-varying input on the onset of chaos and the resulting consequences for information processing. Dynamic mean-field theory yields the statistics of the activity, the maximum Lyapunov exponent, and the memory capacity of the network. We find an exact condition that determines the transition from stable to chaotic dynamics and the sequential memory capacity in closed form. The input suppresses chaos by a dynamic mechanism, shifting the transition to significantly larger coupling strengths than predicted by local stability analysis. Beyond linear stability, a regime of coexistent locally expansive but nonchaotic dynamics emerges that optimizes the capacity of the network to store sequential input.

The spiking activity of single neurons can be well described by a nonlinear integrate-and-fire model that includes somatic adaptation. When exposed to fluctuating inputs, sparsely coupled populations of these model neurons exhibit stochastic collective dynamics that can be effectively characterized using the Fokker-Planck equation. [...] Here we derive from that description four simple models for the spike rate dynamics in terms of low-dimensional ordinary differential equations using two different reduction techniques: one uses the spectral decomposition of the Fokker-Planck operator, the other is based on a cascade of two linear filters and a nonlinearity, which are determined from the Fokker-Planck equation and semi-analytically approximated. We evaluate the reduced models for a wide range of biologically plausible input statistics and find that both approximation approaches lead to spike rate models that accurately reproduce the spiking behavior of the underlying adaptive integrate-and-fire population. [...] The low-dimensional models also well reproduce stable oscillatory spike rate dynamics that are generated either by recurrent synaptic excitation and neuronal adaptation or through delayed inhibitory synaptic feedback. [...] Therefore we have made available implementations that allow one to numerically integrate the low-dimensional spike rate models as well as the Fokker-Planck partial differential equation in efficient ways for arbitrary model parametrizations as open source software. The derived spike rate descriptions retain a direct link to the properties of single neurons, allow for convenient mathematical analyses of network states, and are well suited for application in neural mass/mean-field based brain network models.

Populations of neurons display an extraordinary diversity in the types of problems they solve and behaviors they display. Examples range from generating the complicated motor outputs involved in grasping motions to storing and recalling a specific song for songbird mating. While it is still unknown how populations of neurons can learn to solve such a diverse set of problems, techniques have recently emerged that allow us to determine how to couple neurons to form networks that solve tasks of similar complexity. The most versatile of these approaches are referred to as reservoir computing based techniques. Examples include the FORCE method, a novel technique that harnesses the chaos present in a large, nonlinear system to learn arbitrary dynamics. Unfortunately, little work has been done in directly applying FORCE training to spiking neural networks. Here, we demonstrate the direct applicability of the FORCE method to spiking neurons by training networks to mimic various dynamical systems. As populations of neurons can display much more interesting behaviors than reproducing simple dynamical systems, we trained spiking neural networks to also reproduce sophisticated tasks such as input classification and storing a precise sequence that corresponds to the notes of a song. For all the networks trained, firing rates and spiking statistics were constrained to be within biologically plausible regimes.
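The FORCE/RLS idea is easiest to see in the original rate-network setting (a compact sketch of recursive least squares with output feedback, following the standard recipe rather than the spiking implementation of the paper):

```python
import numpy as np

rng = np.random.default_rng(8)
N, g, dt, alpha = 200, 1.5, 0.1, 1.0
J = g * rng.standard_normal((N, N)) / np.sqrt(N)   # chaotic reservoir
w_fb = rng.uniform(-1.0, 1.0, N)                   # fixed feedback weights
w = np.zeros(N)                                    # readout, learned online
P = np.eye(N) / alpha                              # running inverse correlation
x = 0.5 * rng.standard_normal(N)
T = 8000
err = np.empty(T)
for t in range(T):
    target = np.sin(2 * np.pi * t * dt / 50.0)     # slow sine to reproduce
    r = np.tanh(x)
    z = w @ r                                       # readout before the update
    x += dt * (-x + J @ r + w_fb * z)               # feedback of z into dynamics
    # recursive least-squares (RLS) update of the readout weights
    Pr = P @ r
    k = Pr / (1.0 + r @ Pr)
    P -= np.outer(k, Pr)
    e = z - target
    w -= e * k
    err[t] = abs(e)
early, late = err[:2000].mean(), err[-2000:].mean()
```

The defining property of FORCE is visible in the error trace: RLS clamps the output error to small values almost immediately and keeps shrinking it as training proceeds.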

Networks of spiking neurons (SNNs) are frequently studied as models for networks of neurons in the brain, but also as a paradigm for novel energy efficient computing hardware. In principle they are especially suitable for computations in the temporal domain, such as speech processing, because their computations are carried out via events in time and space. But so far they have been lacking the capability to preserve information for longer time spans during a computation, until it is updated or needed - like a register of a digital computer. This function is provided to artificial neural networks through Long Short-Term Memory (LSTM) units. We show here that SNNs attain similar capabilities if one includes adapting neurons in the network. Adaptation denotes an increase of the firing threshold of a neuron after preceding firing. A substantial fraction of neurons in the neocortex of rodents and humans has been found to be adapting. It turns out that if adapting neurons are integrated in a suitable manner into the architecture of SNNs, the performance of these enhanced SNNs, which we call LSNNs, for computation in the temporal domain approaches that of artificial neural networks with LSTM units. In addition, the computing and learning capabilities of LSNNs can be substantially enhanced through learning-to-learn (L2L) methods from machine learning, that have so far been applied primarily to LSTM networks and apparently never to SNNs. This preliminary report on arXiv will be replaced by a more detailed version in about a month.
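Threshold adaptation of the kind used in LSNNs can be sketched with a single adaptive LIF neuron (illustrative constants, not taken from the paper): each spike raises the threshold, and the raised threshold decays back with a slow time constant, which stretches interspike intervals and carries information about recent firing over long time spans.

```python
import numpy as np

def adaptive_lif(I=1.6, tau_m=0.02, tau_b=1.0, beta=0.5, v_th0=1.0,
                 dt=1e-3, T=2000):
    """LIF neuron with an adaptive threshold v_th0 + beta*b, where b jumps
    by 1 at each spike and decays with the slow time constant tau_b."""
    v, b = 0.0, 0.0
    spikes = []
    for t in range(T):
        v += dt / tau_m * (-v + I)     # leaky membrane integration
        b -= dt / tau_b * b            # slow decay of the threshold variable
        if v >= v_th0 + beta * b:
            spikes.append(t * dt)
            v = 0.0                    # reset
            b += 1.0                   # spike-triggered threshold increase
    return np.array(spikes)

spike_times = adaptive_lif()
isis = np.diff(spike_times)
```

For a constant input the interspike intervals lengthen markedly over the spike train, the single-neuron signature of the firing-rate adaptation that gives LSNNs their longer memory.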

Information flow as nerve impulses in neuronal circuits is regulated at synapses. The synapse is therefore a key element for information processing in the brain. Much attention has been given to fast synaptic transmission, which predominantly regulates impulse-to-impulse transmission. Slow synaptic transmission and modulation, however, have sometimes been neglected in considering and attempting to understand brain function. Slow synaptic potentials and modulation occur with a considerable delay in response to the accumulation of synaptic and modulatory inputs. In these contexts, they are plastic in nature and play important roles in information processing in the brain. A symposium titled "Slow Synaptic Responses and Modulation" was held as the satellite symposium to the 75th Annual Meeting of the Physiological Society of Japan on March 30-31, 1998, in Kanazawa. The theme was selected not only for the reason mentioned above, but also because of the considerable involvement of many Japanese scholars in establishing the basic issues. Following the dawn of synaptic physiological research, in which Sir John Eccles, Sir Bernard Katz, and Professor Stephen Kuffler carried out pioneering work, Professor Kyozou Koketsu and Professor Benjamin Libet, students of Sir John Eccles, and their colleagues established the concept of slow synaptic responses and modulation by studying vertebrate sympathetic ganglia. Since then, the concept has been expanded with detailed investigations of both peripheral and central synapses at the levels of single ion channels, intracellular Ca2+ dynamics, intracellular transduction mechanisms, and genes.

Large-scale recordings of neural activity in behaving animals have established that the transformation of sensory stimuli into motor outputs relies on low-dimensional dynamics at the population level, while individual neurons generally exhibit complex, mixed selectivity. Understanding how low-dimensional computations on mixed, distributed representations emerge from the structure of the recurrent connectivity and inputs to cortical networks is a major challenge. Classical models of recurrent networks fall into two extremes: on one hand, balanced networks are based on fully random connectivity and generate high-dimensional spontaneous activity, while on the other hand, strongly structured, clustered networks lead to low-dimensional dynamics and ad-hoc computations but rely on pure selectivity. A number of functional approaches for training recurrent networks, however, suggest that a specific type of minimal connectivity structure is sufficient to implement a large range of computations. Starting from this observation, here we study a new class of recurrent network models in which the connectivity consists of a combination of a random part and a minimal, low-dimensional structure. We show that in such low-rank recurrent networks, the dynamics are low-dimensional and can be directly inferred from connectivity using a geometrical approach. We exploit this understanding to determine minimal connectivity structures required to implement specific computations. We find that the dynamical range and computational capacity of a network increase quickly with the dimensionality of the structure in the connectivity, so that a rank-two structure is already sufficient to implement a complex behavioral task such as context-dependent decision-making.
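A minimal instance of such a connectivity is a rank-one structure added to a random matrix. The sketch below uses a single binary structure vector m for both the rows and columns of the rank-one term, a simplification of the general case where two distinct vectors are used; the network size, coupling strengths, and initial condition are illustrative assumptions. The dynamics collapse onto the direction defined by the structure, which the overlap variable measures.

```python
import numpy as np

# Rank-one structure on top of a random matrix, as a minimal instance
# of a low-rank recurrent network. rho sets the strength of the
# structure; all parameter values are illustrative assumptions.
rng = np.random.default_rng(1)
N, g, rho = 500, 0.5, 2.0
m = rng.choice([-1.0, 1.0], size=N)       # binary structure vector
J = (rho / N) * np.outer(m, m) + g * rng.standard_normal((N, N)) / np.sqrt(N)

dt = 0.1
x = 0.1 * m                               # small initial overlap with m
for _ in range(500):
    x += dt * (-x + J @ np.tanh(x))       # leaky rate dynamics

# the activity aligns with the structured direction at the fixed point:
overlap = (m @ np.tanh(x)) / N
```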

The brain must both react quickly to new inputs and store a memory of past activity. This requires biology that operates over a vast range of time scales. Fast time scales are determined by the kinetics of synaptic conductances and ionic channels; however, the mechanics of slow time scales are more complicated. In this opinion article we review two distinct network-based mechanisms that impart slow time scales in recurrently coupled neuronal networks. The first is in strongly coupled networks, where the time scale of the internally generated fluctuations diverges at the transition between stable and chaotic firing rate activity. The second is in networks with finitely many members, where noise-induced transitions between metastable states appear as a slow time scale in the ongoing network firing activity. We discuss these mechanisms with an emphasis on their similarities and differences.

Networks of randomly connected neurons are among the most popular models in theoretical neuroscience. The connectivity between neurons in the cortex is however not fully random, the simplest and most prominent deviation from randomness found in experimental data being the overrepresentation of bidirectional connections among pyramidal cells. Using numerical and analytical methods, we investigated the effects of partially symmetric connectivity on dynamics in networks of rate units. We considered the two dynamical regimes exhibited by random neural networks: the weak-coupling regime, where the firing activity decays to a single fixed point unless the network is stimulated, and the strong-coupling or chaotic regime, characterized by internally generated fluctuating firing rates. In the weak-coupling regime, we computed analytically, for an arbitrary degree of symmetry, the autocorrelation of network activity in the presence of external noise. In the chaotic regime, we performed simulations to determine the timescale of the intrinsic fluctuations. In both cases, symmetry increases the characteristic asymptotic decay time of the autocorrelation function and therefore slows down the dynamics in the network.
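A common way to build partially symmetric connectivity is to mix a random matrix with its transpose. The sketch below constructs such a network (eta = 0 fully asymmetric, eta = 1 fully symmetric), simulates the rate dynamics in the strong-coupling regime, and estimates the autocorrelation of the activity, the quantity whose decay time the study analyzes; all parameter values are illustrative assumptions.

```python
import numpy as np

# Partially symmetric random network: J mixes a random matrix with its
# transpose, and the normalization keeps the variance of the entries
# fixed as eta varies. All parameter values are illustrative.
rng = np.random.default_rng(2)
N, g, eta = 300, 2.0, 0.3
A = rng.standard_normal((N, N))
J = g * (A + eta * A.T) / (np.sqrt(1.0 + eta**2) * np.sqrt(N))

dt, steps = 0.1, 4000
x = rng.standard_normal(N)
traj = np.empty((steps, N))
for t in range(steps):
    x += dt * (-x + J @ np.tanh(x))   # leaky rate dynamics
    traj[t] = x

def autocorr(xs, max_lag):
    """Population-averaged autocorrelation, normalized to 1 at lag 0."""
    xs = xs - xs.mean(axis=0)
    var = (xs * xs).mean()
    return np.array([(xs[: len(xs) - lag] * xs[lag:]).mean() / var
                     for lag in range(max_lag)])

ac = autocorr(traj[1000:], 200)       # discard the initial transient
```

Repeating the simulation for several values of eta and comparing the decay of ac is the numerical experiment described in the abstract.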

Recurrent neural networks (RNNs) are a class of computational models that are often used as a tool to explain neurobiological phenomena, considering anatomical, electrophysiological and computational constraints.
RNNs can either be designed to implement a certain dynamical principle, or they can be trained by input–output examples. Recently, there has been large progress in utilizing trained RNNs both for computational tasks, and as explanations of neural phenomena. I will review how combining trained RNNs with reverse engineering can provide an alternative framework for modeling in neuroscience, potentially serving as a powerful hypothesis generation tool.
Despite the recent progress and potential benefits, many fundamental gaps remain on the way toward a theory of these networks. I will discuss these challenges and possible methods to attack them.