The EC-DG-CA3-CA1 circuit and the conceptual parallelism with the CS framework (top: hippocampal structure; bottom: abstract graphical representation of the hippocampal structure and the conceptual parallelism with the CS framework). The EC projection to the DG and CA3, via the perforant path (PP), represents the matrix A of the CS framework. It is decomposed into the matrices Φ and Ψ, which are linked with the subregions LEC and MEC, respectively. The vector x corresponds to the activity of the DG afferents projecting to CA3 (mossy fibers, MF). Input to CA3 from the PP and MF leads to the activation of a CA3 population, the vector y. The backprojection from CA3 to the DG performs error correction, transforming a noisy version y′ of the vector y back into y, while the recurrent collaterals within CA3 are assumed to perform association tasks. CA3 projects to CA1 via the Schaffer Collaterals (SC), and the information loop is closed by the CA1 feedback to the EC; CA1 also receives direct EC input via the Temporoammonic pathway (TA).
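To make the figure's mapping concrete, the short sketch below sets up the CS quantities named in the caption: a sparse vector x (DG activity), a sensing matrix A = ΦΨ (the perforant path), and the compressed representation y = Ax (CA3 activity), with x then recovered from y by orthogonal matching pursuit as a stand-in for the DG-CA3-DG error-correction loop. All dimensions and the random Gaussian matrices are illustrative assumptions, not values from the source article.

```python
# Illustrative CS analogue of the caption: sparse DG activity x is compressed
# into CA3 activity y = A x with A = Phi @ Psi, then recovered by OMP.
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 400, 100, 8                           # DG cells, CA3 cells, active cells

Phi = rng.standard_normal((m, n)) / np.sqrt(m)  # "LEC-linked" measurement matrix
Psi = np.eye(n)                                 # "MEC-linked" basis (identity here)
A = Phi @ Psi                                   # perforant-path sensing matrix

x = np.zeros(n)                                 # sparse DG (mossy fiber) activity
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x                                       # CA3 population vector

# Orthogonal matching pursuit: greedily pick atoms, refit by least squares.
residual, support = y.copy(), []
for _ in range(k):
    support.append(int(np.argmax(np.abs(A.T @ residual))))
    coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    residual = y - A[:, support] @ coef
x_hat = np.zeros(n)
x_hat[support] = coef
print("relative recovery error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```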

Source publication
Article
Full-text available
The hippocampus is one of the most important information-processing units in the brain. Input from the cortex passes through convergent axon pathways to the downstream hippocampal subregions and, after being appropriately processed, is fanned back out to the cortex. Here, we review evidence for the hypothesis that information flow and processing in the...

Similar publications

Article
Full-text available
Associations between co-occurring stimuli are formed in the medial temporal lobe (MTL). Here, we recorded from 508 single and multi-units in the MTL while participants learned and retrieved associations between unfamiliar faces and unfamiliar scenes. Participants' memories for the face-scene pairs were later tested using cued recall and recognition...
Article
Full-text available
The medial entorhinal cortex (MEC) is known to contain spatial encoding neurons that likely contribute to encoding the spatial aspects of episodic memories. However, little is known about the role the MEC plays in encoding the temporal aspects of episodic memories, particularly during immobility. Here, using a virtual ‘Door Stop’ task for mice, we show that MEC...

Citations

... Here, we investigate the dynamical origin of the winner-take-all (WTA) competition which leads to sparse activation of the GCs (improving the pattern separation) [28][29][30][31][32][33][34][35][36][37]. We first note that all the GCs are grouped into lamellar clusters [38][39][40][41]. ...
Article
Full-text available
We consider a biological network of the hippocampal dentate gyrus (DG). Computational models suggest that the DG acts as a preprocessor for pattern separation (i.e., a process transforming a set of similar input patterns into distinct, non-overlapping output patterns), which could facilitate pattern storage and retrieval in the CA3 area of the hippocampus. The main encoding cells in the DG are the granule cells (GCs), which receive input from the entorhinal cortex (EC) and send their output to CA3. We note that the activation degree of the GCs is very low (∼5%). This sparsity has been thought to enhance pattern separation. We investigate the dynamical origin of the winner-take-all (WTA) competition that leads to sparse activation of the GCs. All the GCs are grouped into lamellar clusters; in each cluster, there is one inhibitory (I) basket cell (BC) along with excitatory (E) GCs. There are three kinds of external input to the GCs: the direct excitatory EC input; the indirect feedforward inhibitory EC input, mediated by the HIPP (hilar perforant-path-associated) cells; and the excitatory input from the hilar mossy cells (MCs). The firing activities of the GCs are determined via competition between the external E and I inputs. The E-I conductance ratio $R_{E-I}^{(con)*}$ (given by the time average of the ratio of the external E to I conductances) represents well the degree of this external E-I input competition. We find that GCs become active when their $R_{E-I}^{(con)*}$ is larger than a threshold $R_{th}^*$, and the mean firing rates of the active GCs are strongly correlated with $R_{E-I}^{(con)*}$. In each cluster, the feedback inhibition from the BC selects the winner GCs: GCs with $R_{E-I}^{(con)*}$ larger than the threshold $R_{th}^*$ survive and become winners, while all the other GCs with smaller $R_{E-I}^{(con)*}$ become silent. In this way, WTA competition occurs via competition between the firing activity of the GCs and the feedback inhibition from the BC in each cluster. Finally, we also study the effects of MC death and adult-born immature GCs on the WTA competition.
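A minimal sketch of the WTA selection rule described above, assuming a single cluster and treating the E-I conductance ratio of each GC as a precomputed scalar; the cellular dynamics and the HIPP/MC pathways are abstracted away, and the threshold and rate rule are illustrative assumptions, not the abstract's model:

```python
# Toy winner-take-all selection within one lamellar cluster (illustrative
# numbers; the conductance ratios are random stand-ins, not simulated values).
import numpy as np

rng = np.random.default_rng(1)
n_gc = 50
R_EI = rng.lognormal(mean=0.0, sigma=0.5, size=n_gc)  # E-I conductance ratio per GC
R_th = np.quantile(R_EI, 0.95)                        # threshold chosen for ~5% sparsity

winners = R_EI > R_th                        # GCs whose ratio exceeds the threshold fire
rates = np.where(winners, R_EI - R_th, 0.0)  # assumed: rate grows with the excess ratio

print(f"active fraction: {winners.mean():.2%}")
print("winner indices:", np.flatnonzero(winners))
```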
... The RIP ensures the stability of CS and guarantees proper encoding/decoding of the spatial information. 15 Hadamard matrices were chosen as sensing matrices in this study because they are widely used in CS-SPI applications with good results. 16,17,18 Hadamard matrices are binary (composed of ±1 entries) square matrices with orthogonal rows and columns. ...
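A Hadamard sensing matrix of the kind mentioned in this excerpt can be built with the standard Sylvester construction; the sketch below (the 64x64 size and the subsampling pattern are arbitrary choices, not taken from the cited study) verifies the orthogonality property and uses a subset of rows as CS measurements:

```python
# Hadamard matrix as a CS sensing matrix (Sylvester construction via SciPy).
import numpy as np
from scipy.linalg import hadamard

n = 64
H = hadamard(n)                                # +/-1 entries; H @ H.T == n * I
assert np.array_equal(H @ H.T, n * np.eye(n, dtype=int))

rng = np.random.default_rng(2)
rows = rng.choice(n, size=16, replace=False)   # keep 16 of 64 rows
A = H[rows] / np.sqrt(n)                       # subsampled, normalized sensing matrix

x = np.zeros(n); x[[3, 17, 42]] = 1.0          # sparse test signal
y = A @ x                                      # compressed measurements
print("measurements shape:", y.shape)
```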
... From a reverse engineering perspective, it is highly efficient that an object viewed at the highest frequency be allocated the smallest number of bits in memory. Different from the compression technique in CTOR, this alternative strategy would minimize storage and be akin to compression in the sense of Huffman coding or more recent proposals that suggest the hippocampus is performing even more sophisticated compression techniques (Petrantonakis and Poirazi, 2014). ...
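The Huffman-coding analogy in this excerpt is easy to make concrete: frequently encountered items receive the shortest codes. Below is a minimal, self-contained Huffman code-length computation over an invented frequency table (the "objects" and counts are illustrative only):

```python
# Minimal Huffman coding over a toy frequency table: frequently seen
# "objects" get the shortest codes, mirroring the excerpt's analogy.
import heapq
import itertools

freqs = {"cup": 50, "chair": 25, "lamp": 15, "globe": 10}  # invented counts

counter = itertools.count()  # tie-breaker so heapq never compares dicts
heap = [(f, next(counter), {sym: ""}) for sym, f in freqs.items()]
heapq.heapify(heap)
while len(heap) > 1:
    f1, _, c1 = heapq.heappop(heap)            # two least frequent subtrees
    f2, _, c2 = heapq.heappop(heap)
    merged = {s: "0" + code for s, code in c1.items()}
    merged.update({s: "1" + code for s, code in c2.items()})
    heapq.heappush(heap, (f1 + f2, next(counter), merged))

codes = heap[0][2]
for sym in sorted(freqs, key=freqs.get, reverse=True):
    print(f"{sym:6s} freq={freqs[sym]:3d} code={codes[sym]}")
```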
Article
Full-text available
The ventral visual stream (VVS) is a fundamental pathway involved in visual object identification and recognition. In this work, we present a hypothesis about the sequence of computations performed by the VVS during object recognition. The operations performed by the inferior temporal (IT) cortex are represented not as a neural network but rather in line with a dynamic-inference instantiation of the untangling notion. The presentation draws upon a technique for dynamic maximum a posteriori probability (MAP) sequence estimation based on the Viterbi algorithm. Simulation results are presented to show that the decoding portion of the architecture associated with the IT can effectively untangle object identity when presented with synthetic data. More importantly, we take a step forward in visual neuroscience by presenting a framework for an inference-based approach that is biologically inspired via attributes implicated in primate object recognition. The analysis provides insight into the exceptional proficiency of the VVS.
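For readers unfamiliar with the decoding machinery referenced here, a generic Viterbi MAP sequence estimator over a small hidden Markov model looks like the following; the two-state model and all probabilities are invented for illustration and are not the article's architecture:

```python
# Generic Viterbi MAP sequence decoding for a toy 2-state HMM
# (all probabilities invented; log-space to avoid underflow).
import numpy as np

log_pi = np.log([0.6, 0.4])                    # initial state probabilities
log_A = np.log([[0.7, 0.3], [0.4, 0.6]])       # transition matrix
log_B = np.log([[0.9, 0.1], [0.2, 0.8]])       # emission matrix (2 symbols)
obs = [0, 0, 1, 0, 1, 1]                       # observed symbol sequence

T, S = len(obs), 2
delta = np.full((T, S), -np.inf)               # best log-prob ending in each state
back = np.zeros((T, S), dtype=int)             # backpointers
delta[0] = log_pi + log_B[:, obs[0]]
for t in range(1, T):
    scores = delta[t - 1][:, None] + log_A     # scores[i, j]: state i -> state j
    back[t] = scores.argmax(axis=0)
    delta[t] = scores.max(axis=0) + log_B[:, obs[t]]

path = [int(delta[-1].argmax())]
for t in range(T - 1, 0, -1):                  # trace back the MAP path
    path.append(int(back[t, path[-1]]))
print("MAP state sequence:", path[::-1])
```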
... Another extension of the model in the future would be to impose sparsity constraints on the learning rule in order to render data encoding processes more efficient. Furthermore, such a modification would make the model able to simulate more biological phenomena, such as the sparse compression occurring within the hippocampus [39]. ...
Preprint
Full-text available
Neural networks are commonly trained to make predictions through learning algorithms. Contrastive Hebbian learning, a powerful rule inspired by gradient backpropagation, is based on Hebb's rule and the contrastive divergence algorithm. It operates in two phases: the forward (or free) phase, where the data are fed to the network, and a backward (or clamped) phase, where the target signals are clamped to the output layer of the network and the feedback signals are transformed through the transposed synaptic weight matrices. This implies symmetries at the synaptic level, for which there is no evidence in the brain. In this work, we propose a new variant of the algorithm, called random contrastive Hebbian learning, which does not rely on any synaptic weight symmetries. Instead, it uses random matrices to transform the feedback signals during the clamped phase, and the neural dynamics are described by first-order nonlinear differential equations. The algorithm is experimentally verified by solving a Boolean logic task, classification tasks (handwritten digits and letters), and an autoencoding task. This article also shows how the parameters affect learning, especially the random matrices. We use pseudospectra analysis to investigate further how random matrices impact the learning process. Finally, we discuss the biological plausibility of the proposed algorithm and how it can give rise to better computational models for learning.
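A heavily simplified, discrete-time sketch of the two-phase idea described in this abstract: a free phase propagates the input forward, a clamped phase feeds the target error back through a fixed random matrix G (instead of the transposed weights), and the weight update is the contrastive difference between phase statistics. The single settling step, layer sizes, learning rates, and toy regression task are all simplifying assumptions relative to the preprint's continuous-time formulation:

```python
# Simplified random contrastive Hebbian learning on a toy regression task.
# G is a fixed random feedback matrix replacing V.T (the preprint's key idea);
# the one-step "settling" and the task are simplifying assumptions.
import numpy as np

rng = np.random.default_rng(3)
n_in, n_hid, n_out = 4, 16, 2
W = rng.standard_normal((n_hid, n_in)) * 0.3   # input -> hidden weights
V = rng.standard_normal((n_out, n_hid)) * 0.3  # hidden -> output weights
G = rng.standard_normal((n_hid, n_out)) * 0.3  # fixed random feedback matrix

X = rng.standard_normal((200, n_in))
Y = np.tanh(X @ rng.standard_normal((n_in, n_out)))  # toy targets
eta, gamma = 0.01, 0.5

for epoch in range(300):
    for x, y in zip(X, Y):
        h_free = np.tanh(W @ x)                # free phase: forward pass
        y_free = V @ h_free
        # Clamped phase: target error fed back through the random matrix G.
        h_clamp = np.tanh(W @ x + gamma * G @ (y - y_free))
        y_clamp = y.copy()                     # output clamped to the target
        # Contrastive Hebbian update: clamped minus free co-activations.
        W += eta * (np.outer(h_clamp, x) - np.outer(h_free, x))
        V += eta * (np.outer(y_clamp, h_clamp) - np.outer(y_free, h_free))

print("final MSE:", float(np.mean((np.tanh(X @ W.T) @ V.T - Y) ** 2)))
```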
... All in all, this work presents a preliminary account of reconstructing the GF from few measurements, proposes possible solutions for better performance in real cases, and contains original experimental results toward that end. This is the first time this problem has been dealt with, and the proposed methodological framework would have useful applications both in experimental settings, where grid cell activity is recorded, and in theoretical contributions concerning the functional interpretation of the Entorhinal Cortex output to the downstream hippocampal regions [28][29][30]. ...
Article
The discovery of the striking tessellating firing fields of grid cells has boosted research on brain circuits that dynamically represent self-location. The detection of such cells demands long recordings, so that the whole grid mosaic is clearly revealed. The scope of this study is to present a methodology for unraveling the complete firing field of a grid cell even when it is poorly represented by the recorded spikes. The proposed approach is based on the fact that the spikes recorded from a grid cell during random navigation in the environment can be considered a sampling process of the respective whole grid field (GF), seen as a binary image. In this work, the Approximate Message Passing algorithm is used to reveal the whole GF image of a grid cell from only a few samples. The proposed approach was tested on both simulated and real data with promising results (mean squared error less than 0.15). The efficiency of the reconstruction process appeared to depend on the rat's route within the environment and on the respective probability of changing route direction at every step. The proposed approach paves the way for efficient methods to detect and identify grid cells. Nevertheless, experimentation with real rats' sample routes would further enhance the reconstruction efficiency. Overall, this paper tackles, for the first time, the problem of detecting grid cells' firing fields even when the corresponding recorded spikes do not represent their shape adequately, and proposes a respective signal-processing methodology for its solution.
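For orientation, a bare-bones Approximate Message Passing (AMP) loop for sparse recovery, of the general kind this article applies to grid-field images, can be written as follows. The Gaussian sensing matrix, soft-threshold denoiser, and all sizes are invented; the article's actual sampling operator is the animal's trajectory, not a random matrix:

```python
# Bare-bones AMP with a soft-threshold denoiser (illustrative sizes; the
# Onsager correction term distinguishes AMP from plain iterative thresholding).
import numpy as np

rng = np.random.default_rng(4)
n, m, k = 500, 150, 15
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true

soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
x = np.zeros(n)
z = y.copy()
for _ in range(30):
    theta = 2.0 * np.linalg.norm(z) / np.sqrt(m)   # threshold tracks residual scale
    x_new = soft(x + A.T @ z, theta)
    # Onsager correction: without the last term this is plain iterative thresholding.
    z = y - A @ x_new + z * np.count_nonzero(x_new) / m
    x = x_new

err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(f"relative reconstruction error: {err:.3f}")
```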
... Given the temporally punctate nature of neuronal spikes, sparsity has been frequently considered within the schema of efficient coding [4,5]. Notable within the sparse coding framework [6][7][8] are analyses that leverage the theory of compressed sensing (CS) [9][10][11] for underdetermined linear systems (e.g., [12][13][14][15]). CS outlines conditions under which a linear system admits exact and stable recovery (i.e., decoding) of a sparse input [16][17][18][19]. ...
... In general, we do not know a priori where the active columns of B are located at each time. Thus, $J_s^K$ is defined as the set of all possible matrices satisfying the structure in (12), where the cardinality of this set is $\binom{m}{s}^K$. We first establish conditions under which a one-to-one correspondence exists between sparse input and observed output for the system (3). ...
Article
In this paper, we study how the dynamics of recurrent networks, formulated as general dynamical systems, mediate the recovery of sparse, time-varying signals. Our formulation resembles the well-described problem of compressed sensing, but in a dynamic setting. We specifically consider the problem of recovering a high-dimensional network input, over time, from observation of only a subset of the network states (i.e., the network output). Our goal is to ascertain how the network dynamics may enable recovery, even if classical methods fail at each time instant. We are particularly interested in understanding performance in scenarios where both the input and output are corrupted by disturbance and noise, respectively. Our main results consist of the development of analytical conditions, including a generalized observability criterion, that ensure exact and stable input recovery in a dynamic, recurrent network setting.
... The ability of the animal's brain to discriminate between similar experiences is a crucial feature of episodic memory. It is believed that information processing in the hippocampus complies with 'compressed sensing theory' (Petrantonakis and Poirazi, 2014). The formation of discrete representations in memory is thought to depend on a pattern separation process whereby cortical inputs are decorrelated as they enter the early stages of the hippocampus (Gilbert et al., 2001; Leutgeb et al., 2007). ...
Article
Full-text available
Information processing in the hippocampus begins by transferring the spiking activity of the entorhinal cortex (EC) into the dentate gyrus (DG). Activity patterns in the EC are separated by the DG, which thereby plays an important role in hippocampal functions including memory. The structural and physiological parameters of these neural networks enable the hippocampus to be efficient in encoding the large number of inputs that animals receive and process in their lifetime. The neural encoding capacity of the DG depends on its single-neuron encoding and pattern separation efficiency. In this study, encoding by the DG is modeled such that single-neuron and pattern separation efficiency are measured using simulations with different parameter values. For this purpose, a probabilistic model of single-neuron efficiency is presented to study the role of structural and physiological parameters. The known numbers of neurons in the EC and the DG are used to construct a neural network based on the electrophysiological features of DG granule cells. Separated inputs, as activated neurons in the EC with different firing probabilities, are presented to the DG. For different connectivity rates between the EC and DG, the pattern separation efficiency of the DG is measured. The results show that in the absence of feedback inhibition on the DG neurons, the DG demonstrates low separation efficiency and high firing frequency. Feedback inhibition can increase separation efficiency while resulting in very low single-neuron encoding efficiency in the DG and a very low firing frequency of DG neurons (sparse spiking). This work presents a mechanistic explanation for experimental observations in the hippocampus, in combination with theoretical measures. Moreover, the model predicts a critical role for impaired inhibitory neurons in schizophrenia, where deficient pattern separation in the DG has been observed.
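A toy version of the comparison reported here can be scripted directly: random EC-to-DG connectivity, two correlated EC patterns, and DG output computed with and without a k-winners-take-all stand-in for feedback inhibition. All sizes, thresholds, and the overlap measure below are invented for illustration and are much simpler than the article's probabilistic model:

```python
# Toy pattern-separation test: DG output overlap for two correlated EC
# patterns, with and without feedback-inhibition-like k-WTA (invented sizes).
import numpy as np

rng = np.random.default_rng(5)
n_ec, n_dg, p_conn = 200, 1000, 0.1
W = (rng.random((n_dg, n_ec)) < p_conn).astype(float)  # EC -> DG connectivity

ec1 = (rng.random(n_ec) < 0.2).astype(float)           # EC pattern 1 (20% active)
ec2 = ec1.copy()
flip = rng.choice(n_ec, size=20, replace=False)        # pattern 2: 10% of units flipped
ec2[flip] = 1 - ec2[flip]

def overlap(a, b):
    """Cosine overlap between two binary activity patterns."""
    return np.sum(a * b) / max(np.sqrt(np.sum(a) * np.sum(b)), 1e-9)

drive1, drive2 = W @ ec1, W @ ec2
# Without inhibition: any DG cell above a fixed drive threshold fires.
out1, out2 = (drive1 > 3).astype(float), (drive2 > 3).astype(float)
# With "feedback inhibition": only the top 2% most driven cells fire (k-WTA).
k = int(0.02 * n_dg)
kwta = lambda d: (d >= np.sort(d)[-k]).astype(float)

print(f"EC input overlap:             {overlap(ec1, ec2):.3f}")
print(f"DG overlap, no inhibition:    {overlap(out1, out2):.3f}")
print(f"DG overlap, k-WTA inhibition: {overlap(kwta(drive1), kwta(drive2)):.3f}")
```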
... Based on this evidence, we propose that the Error term is produced in the CA3 region and is fed back to DG in an effort to find the sparsest population in DG, i.e., the vector x. Then, the algorithm evolves and the next step (each step is considered as the loop DG-CA3-DG) incorporates the sparser projection from DG to CA3 causing a new CA3 representation that is iteratively compared with the initial representation caused by the perforant path [29]. ...
... The dissociation of the approximation imposed by DG-IST may relate to the fact that slight differences in EC input can recruit new GCs that were initially inactive. In a sparse approximation task, pattern separation refers to the fact that measurements $y_1$ and $y_2$ of different signals $f_1$ and $f_2$ are due to the different representations $x_1$ and $x_2$ (see S1 Text for more information on sparse representations of signals based on a dictionary set C). Thus, it would be interesting to investigate whether DG performs pattern separation in terms of estimating a sparse representation of two slightly different sources of activation in the cortex, as recently proposed [29]. For instance, assuming that the EC input refers to a dictionary C. ...
Article
Full-text available
Memory-related activity in the Dentate Gyrus (DG) is characterized by sparsity. Memory representations are seen as activated neuronal populations of granule cells, the main encoding cells in the DG, which are estimated to engage 2-4% of the total population. This sparsity is assumed to enhance the ability of the DG to perform pattern separation, one of the most valuable contributions of the DG during memory formation. In this work, we investigate how features of the DG, such as its excitatory and inhibitory connectivity diagram, can be used to develop theoretical algorithms performing sparse approximation, a widely used strategy in the signal processing field. Sparse approximation stands for the algorithmic identification of a few components from a dictionary that approximate a certain signal. The ability of the DG to achieve pattern separation by sparsifying its representations is exploited here to improve the performance of the state-of-the-art sparse approximation algorithm "Iterative Soft Thresholding" (IST) by adding new algorithmic features inspired by the DG circuitry. Lateral inhibition of granule cells, either direct or indirect via mossy cells, is shown to enhance the performance of the IST. Apart from revealing the potential of DG-inspired theoretical algorithms, this work presents new insights regarding the function of particular cell types in the pattern separation task of the DG.
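Since this article builds on Iterative Soft Thresholding (IST), a plain-vanilla IST loop is sketched below for reference; the DG-inspired additions (lateral inhibition, direct or via mossy cells) are the article's contribution and are not reproduced here. Dictionary size, sparsity, and regularization are illustrative choices:

```python
# Plain IST (iterative soft thresholding) for sparse approximation against a
# random dictionary C; the DG-inspired refinements of the article are omitted.
import numpy as np

rng = np.random.default_rng(6)
n_atoms, dim, k = 300, 100, 6
C = rng.standard_normal((dim, n_atoms))
C /= np.linalg.norm(C, axis=0)                 # unit-norm dictionary atoms

x_true = np.zeros(n_atoms)
x_true[rng.choice(n_atoms, k, replace=False)] = rng.standard_normal(k)
f = C @ x_true                                 # signal to approximate

soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
step = 1.0 / np.linalg.norm(C, 2) ** 2         # step size from the Lipschitz constant
x, lam = np.zeros(n_atoms), 0.05
for _ in range(1000):
    # Gradient step on the residual, then shrink: the "soft thresholding" in IST.
    x = soft(x + step * C.T @ (f - C @ x), step * lam)

print("true support recovered:", set(np.flatnonzero(x)) >= set(np.flatnonzero(x_true)))
print("active atoms:", np.count_nonzero(x))
```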
... Orderly development, use and refinement of this cortical database requires three major supporting functions that also require some form of learning: Value judgments are required to decide what level of certainty is acceptable for the identity and expected behavior of an unknown object, tentatively ascribed to the basal ganglia (Bornstein and Daw, 2011). If no acceptable identification is possible, then these unreconciled associations of motor strategies and sensory feedback that have been experienced with the unknown object must be remembered and eventually turned into a compressed, efficient representation of a new entity in cerebral cortex, tentatively ascribed to hippocampus (Winocur et al., 2010;Petrantonakis and Poirazi, 2014). The learned, abstract motor strategies need to be coordinated with lower level sensorimotor systems (e.g., spinal cord and brainstem) that can activate and stabilize complex body movements, tentatively ascribed to cerebellum (Thach, 2014). ...
Article
Full-text available
Theories of perception seek to explain how sensory data are processed to identify previously experienced objects, but they usually do not consider the decisions and effort that goes into acquiring the sensory data. Identification of objects according to their tactile properties requires active exploratory movements. The sensory data thereby obtained depend on the details of those movements, which human subjects change rapidly and seemingly capriciously. Bayesian Exploration is an algorithm that uses prior experience to decide which next exploratory movement should provide the most useful data to disambiguate the most likely possibilities. In previous studies, a simple robot equipped with a biomimetic tactile sensor and operated according to Bayesian Exploration performed in a manner similar to and actually better than humans on a texture identification task. Expanding on this, "Bayesian Action&Perception" refers to the construction and querying of an associative memory of previously experienced entities containing both sensory data and the motor programs that elicited them. We hypothesize that this memory can be queried (i) to identify useful next exploratory movements during identification of an unknown entity ("action for perception") or (ii) to characterize whether an unknown entity is fit for purpose ("perception for action") or (iii) to recall what actions might be feasible for a known entity (Gibsonian affordance). The biomimetic design of this mechatronic system may provide insights into the neuronal basis of biological action and perception.
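A cartoon of the Bayesian Exploration step described here: given a posterior over candidate textures and per-movement measurement models, pick the movement whose expected outcome best disambiguates the remaining hypotheses, then update the posterior with Bayes' rule. The Gaussian measurement models, the texture names, and all numbers are invented for illustration, not taken from the article's robot experiments:

```python
# Cartoon Bayesian Exploration: choose the exploratory movement expected to
# best disambiguate texture hypotheses (Gaussian measurement models, invented).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)
textures = ["silk", "denim", "sandpaper"]
# Mean sensor reading for each (movement, texture); rows are movements.
means = np.array([[0.2, 0.3, 0.9],    # movement 0: light slide
                  [0.1, 0.7, 0.8]])   # movement 1: firm press
sigma = 0.15
posterior = np.ones(len(textures)) / len(textures)
true_texture = 2                      # the unknown object is "sandpaper"

def expected_entropy(move, posterior):
    """Average posterior entropy after movement `move`, over simulated readings."""
    ent = 0.0
    for t, p in enumerate(posterior):          # marginalize over the true texture
        samples = rng.normal(means[move, t], sigma, size=200)
        like = norm.pdf(samples[:, None], means[move], sigma)  # (200, n_textures)
        post = like * posterior
        post /= post.sum(axis=1, keepdims=True)
        ent += p * np.mean(-(post * np.log(post + 1e-12)).sum(axis=1))
    return ent

for step in range(3):
    # "Action for perception": pick the most informative next movement.
    move = int(np.argmin([expected_entropy(mv, posterior) for mv in range(len(means))]))
    reading = rng.normal(means[move, true_texture], sigma)     # perform the movement
    posterior *= norm.pdf(reading, means[move], sigma)         # Bayes update
    posterior /= posterior.sum()
    print(f"step {step}: move {move}, belief {np.round(posterior, 2)}")
```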
... Layers from top to bottom: cx-cortex, or-oriens, pyr-pyramidale, rad-radiatum, lm-lacunosum moleculare. Modified from (134). ...