Neuronal Circuits Underlying Persistent Representations Despite Time Varying Activity

Janelia Farm Research Campus, Howard Hughes Medical Institute, 19700 Helix Drive, Ashburn, VA 20176, USA.
Current Biology (Impact Factor: 9.57), 22(22), October 2012. DOI: 10.1016/j.cub.2012.08.058
Source: PubMed


Our brains are capable of remarkably stable stimulus representations despite time-varying neural activity. For instance, during the delay periods of working memory tasks, while a stimulus is held in memory, neurons in the prefrontal cortex, thought to support the memory representation, exhibit time-varying activity. Since neuronal activity encodes the stimulus, these time-varying dynamics appear paradoxical and incompatible with stable network representations of the stimulus. Indeed, this finding raises a fundamental question: can stable representations be encoded only with stable neural activity, or, as a corollary, is every change in activity a sign of a change in stimulus representation?

Here we explain how the different time-varying representations offered by individual neurons can be woven together into a coherent, time-invariant representation. Motivated by two ubiquitous features of the neocortex, redundancy of neural representation and sparse intracortical connectivity, we derive a network architecture that resolves the apparent contradiction between representation stability and changing neural activity. Unexpectedly, this architecture exhibits many structural properties that have been measured in cortical sensory areas. In particular, it accounts for few-neuron connectivity motifs, the distribution of synaptic weights, and the relation between neurons' functional properties and their connection probability.

We show that intuition about network stimulus representation, typically derived from considering single neurons, may be misleading, and that time-varying activity of a distributed representation in cortical circuits does not necessarily imply that the network explicitly encodes time-varying properties.
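The core idea can be illustrated with a toy linear network (a minimal sketch of the general principle, not the published model; all variable names are illustrative): if the recurrent dynamics are confined to the null space of a fixed readout vector, the readout stays constant even while every neuron's rate keeps changing.

```python
import numpy as np

# Toy linear network: recurrent flow is confined to the null space of a
# fixed readout vector w, so the readout w @ r is conserved even though
# the firing-rate vector r itself changes over time.
rng = np.random.default_rng(0)
n = 50
w = rng.standard_normal(n)                     # fixed linear readout
P = np.eye(n) - np.outer(w, w) / (w @ w)       # projector onto null space of w
B = 0.1 * rng.standard_normal((n, n))          # arbitrary recurrent weights
A = P @ (B - np.eye(n))                        # w @ A == 0 by construction

r0 = rng.standard_normal(n)                    # initial firing rates
r, dt = r0.copy(), 0.01
drift = 0.0
for _ in range(2000):
    r = r + dt * (A @ r)                       # Euler step of dr/dt = A r
    drift = max(drift, abs(w @ r - w @ r0))

print(drift, np.linalg.norm(r - r0))           # tiny readout drift, large rate change
```

Because each column of A is orthogonal to w, every update moves r without altering the projection onto the readout: the population activity is time-varying while the encoded quantity is not.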



Available from: Dmitri B Chklovskii
    • "For example, it is believed that diseases of the nervous system such as attention deficit hyperactivity disorder (ADHD), bipolar disorder, and epilepsy could be explained by using chaotic models [38] [39] [40]. On the other hand, the activities of neurons have also been investigated by using the PSpice tool or artificial neuronal circuits [41] [42] [43] [44] [45] [46] [47] [48] [49] [50]. In the case of circuit implementation, integrated circuits [51] [52] are often used for building amplifiers or experimental circuit elements, while some researchers prefer to use commercial amplifiers because of their availability and easy accessibility."
    ABSTRACT: The Hindmarsh–Rose neuron model can reproduce the main properties of neuronal activity, and it is effective for dynamical investigation. Neuronal activity can also be verified by using realistic circuits mapped from theoretical neuronal models. The mode of electrical activity of each neuron depends on the external forcing, the coupling between neurons, and noise in the network or external uncertain driving. It is challenging to design reliable but practical artificial neuronal circuits to study transitions in the electrical activities of neurons. In this paper, a practical artificial circuit is fabricated to reproduce the electrical activity of a neuron with different discharge modes, and the detailed elements of this circuit are presented. Additive noise is imposed on a single neuronal circuit and on coupled neuronal circuits, and then the noise-induced transition of electrical activity and the occurrence of synchronization between neuronal circuits under optimized noise (moderate noise intensity) are investigated. It is found that noise can help activate a quiescent neuronal circuit and can also induce synchronization between coupled neuronal circuits. This practical neuronal circuit is helpful for further study of the collective behaviors of coupled neuronal networks.
    Full-text · Article · May 2015 · Communications in Nonlinear Science and Numerical Simulation
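For reference, the Hindmarsh–Rose model mentioned in the abstract above is defined by three coupled ODEs; a minimal numerical sketch with the standard textbook parameters (the specific values and step size here are illustrative, not taken from the cited paper) is:

```python
import numpy as np

# Hindmarsh-Rose neuron, standard parameters; I = 3.0 yields bursting/spiking.
a, b, c, d = 1.0, 3.0, 1.0, 5.0
r, s, x_rest, I = 0.006, 4.0, -1.6, 3.0

x, y, z = -1.0, 0.0, 0.0
dt, steps = 0.005, 40000
xs = np.empty(steps)
for i in range(steps):
    dx = y + b * x**2 - a * x**3 - z + I   # fast membrane variable
    dy = c - d * x**2 - y                  # fast recovery variable
    dz = r * (s * (x - x_rest) - z)        # slow adaptation current
    x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
    xs[i] = x

print(xs.min(), xs.max())                  # x stays bounded and spikes above rest
```

Varying the injected current I (or adding a noise term to dx, as the cited study does in hardware) switches the trace between quiescent, spiking, and bursting discharge modes.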
    • "Finally, operational evidence for representation would be to show that the pattern is stable in time. In principle, a representation (the information about what makes a thing that thing) should not change, nor should its neural pattern (Druckmann and Chklovskii, 2012). All things being equal, the particular information of “what makes a thing that thing” will always be the necessary and sufficient information to count as the representation of “the thing,” and as such, it should be stable."
    ABSTRACT: Neural mind-reading studies, based on multivariate pattern analysis (MVPA) methods, are providing exciting new results. Some of the results obtained with these paradigms have raised high expectations, such as the possibility of creating brain-reading devices. However, such hopes are based on the assumptions that: (a) the BOLD signal is a marker of neural activity; (b) the BOLD pattern identified by an MVPA is a neurally sound pattern; (c) the MVPA's feature space is a good mapping of the neural representation of a stimulus; and (d) the pattern identified by an MVPA corresponds to a representation. I examine here the challenges that still have to be met before fully accepting such assumptions.
    Full-text · Article · Jun 2013 · Frontiers in Human Neuroscience
    ABSTRACT: We propose a theory of the early processing in the mammalian visual pathway. The theory is formulated in the language of information theory and hypothesizes that the goal of this processing is to recode in order to reduce a generalized redundancy subject to a constraint that specifies the amount of average information preserved. In the limit of no noise, this theory becomes equivalent to Barlow's redundancy reduction hypothesis, but it leads to very different computational strategies when noise is present. A tractable approach for finding the optimal encoding is to solve the problem in successive stages where at each stage the optimization is performed within a restricted class of transfer functions. We explicitly find the solution for the class of encodings to which the parvocellular retinal processing belongs, namely linear and nondivergent transformations. The solution shows agreement with the experimentally observed transfer functions at all levels of signal to noise.
    No preview · Article · Sep 1990 · Neural Computation
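In the no-noise limit, the abstract above notes that the theory reduces to Barlow's redundancy reduction, whose linear version is decorrelation. A minimal whitening sketch (an illustration of that limiting case, not the paper's derivation; the data and names are made up) is:

```python
import numpy as np

# Whitening as linear redundancy reduction: transform correlated inputs so
# that the output channels are decorrelated with unit variance.
rng = np.random.default_rng(1)
mix = rng.standard_normal((4, 4))
X = rng.standard_normal((10000, 4)) @ mix.T      # correlated, zero-mean inputs

C = np.cov(X, rowvar=False)                      # input covariance
eigval, eigvec = np.linalg.eigh(C)
W = eigvec @ np.diag(eigval ** -0.5) @ eigvec.T  # symmetric whitening filter
Y = X @ W.T                                      # decorrelated outputs

C_out = np.cov(Y, rowvar=False)
print(np.round(C_out, 2))                        # approximately the identity
```

With noise present, the optimal filter no longer fully whitens; it trades decorrelation against noise amplification, which is the regime the cited theory addresses.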