Article

From the neuron doctrine to neural networks

Authors:
Rafael Yuste

Abstract

For over a century, the neuron doctrine - which states that the neuron is the structural and functional unit of the nervous system - has provided a conceptual foundation for neuroscience. This viewpoint reflects its origins in a time when the use of single-neuron anatomical and physiological techniques was prominent. However, newer multineuronal recording methods have revealed that ensembles of neurons, rather than individual cells, can form physiological units and generate emergent functional properties and states. As a new paradigm for neuroscience, neural network models have the potential to incorporate knowledge acquired with single-neuron approaches to help us understand how emergent functional states generate behaviour, cognition and mental disease.


... 19 given that functions correspond to specific structures accompanying activation patterns. 30 Thus, finding the general structural rules that match hierarchical structures to functions can be the first step in building the brain symbolization blueprint. Such work is similar to connecting hardware with software via computer programming languages. ...
... This is because the brain and the computer share a common aspect: both represent external environments as data. 30,32,33 This perspective is consistent with the brain-computer metaphor, not in the sense that single neurons act like logic gates from computer science, but in the sense that some neuronal circuits are bound to serve as mnemonic symbols, as in assembly languages, where arrangements of machine on-off signals (zero or one) become symbolized into data. 34 (Figure 3) In other words, the brain cannot conduct the information-processing functions (creating, processing, saving, and utilizing information) without transforming external physical environments into data. ...
... The single neuron has been investigated as a computational unit of the brain and the nervous system. 30,50 With advances in technology, understanding of neuronal functions and structures has deepened. 30 Four structural types of neurons (unipolar, bipolar, multipolar, and pseudounipolar) were classified, and more than 100 neurotransmitters were identified. ...
Preprint
Full-text available
To understand how the brain works, neuroscience has emerged and progressed through tremendous studies of neurons. Among them are many important achievements, including the discovery of neural structures and functions. It was found that brain information is transported via neural synapses and that synapse formation generates memory. Furthermore, artificial intelligence using the neural network model has come to imitate high cognitive functions of humans. However, understanding the fundamental principles of neural computation and plasticity still seems overwhelming and ambitious. We still do not know how the brain deals with information and how neural circuits reflect external environments, which indicates that the general research strategy applied to other biological organs is not effective enough here. To ameliorate this contradictory situation, I discuss how the difference between the brain and other organs stems from a high level of functional abstraction, especially the symbolizations that create a discrepancy between physical and functional boundaries in neural networks. I then review crucial findings in neuroscience in terms of how we can reduce this discrepancy by building a brain symbolization blueprint.
... Assuming that the dynamical error changes more quickly than the average dynamics, using the adiabatic elimination method, let $dY/dt = 0$; then $Y^2 = 1 - k - 3X^2$ and $Y = 0$ can be obtained. Considering the case where the two oscillators are not synchronized, $Y^2 = 1 - k - 3X^2$ can be substituted into equation (3). The solution is: ...
... When $x_0(0) = x_1(0) = \pm 1$, we substitute $Y = 0$ into equation (3). Then equation (3) can be represented as: ...
Article
Full-text available
Weak signal amplification is extremely important in biological systems (such as brain, nerve cells, gene regulatory networks), which relates to signal coding and processing. This study incorporates intra-layer coupling within a simple Y-shaped unidirectional chain as a means of exploring the effect of small changes in the network structure on the propagation of weak signals, where the nodes in the chain are treated as bistable oscillators. It is found that when the intra-layer coupling is below a critical threshold and the inter-layer coupling is at a medium range, the transmission of signals in the initial layer is significantly enhanced. This study also reveals the amplification mechanism of weak signals in the Y-shaped unidirectional chain. The successful amplification of weak signals in the Y-shaped unidirectional chain is related to the synchronization state of the feed-forward oscillators and the amplitude of oscillators. The signal amplification is also determined by the frequency of the input signals. Furthermore, this paper has contrasted numerical simulations with analytical calculations in this simplified topology. This research contributes to understanding the relationship between network structure and weak signal transmission.
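The amplification mechanism described here lends itself to a quick numerical check. Below is a minimal Python sketch of weak-signal propagation along a unidirectional chain of overdamped bistable units, assuming double-well dynamics dx_i/dt = x_i - x_i^3 plus diffusive coupling from the upstream node; the parameter values and the measured quantity (mean squared response per node) are illustrative assumptions, not the paper's exact model.

```python
import numpy as np

# Sketch: weak-signal propagation along a unidirectional chain of bistable
# (double-well) units, dx_i/dt = x_i - x_i^3 + k*(x_{i-1} - x_i), with a
# weak periodic signal injected only at the head node. Parameters are
# illustrative assumptions.

def simulate_chain(n_nodes=5, k=0.6, amp=0.05, omega=0.1,
                   dt=0.01, t_max=2000.0, seed=0):
    rng = np.random.default_rng(seed)
    n_steps = int(t_max / dt)
    x = rng.uniform(-1, 1, n_nodes)          # random initial states
    power = np.zeros(n_nodes)                # running signal power per node
    for step in range(n_steps):
        t = step * dt
        drive = np.zeros(n_nodes)
        drive[0] = amp * np.sin(omega * t)   # weak input at the head only
        coupling = np.zeros(n_nodes)
        coupling[1:] = k * (x[:-1] - x[1:])  # unidirectional (feed-forward)
        x = x + dt * (x - x**3 + coupling + drive)  # Euler step
        power += x**2
    return power / n_steps                   # mean squared response per node

print(simulate_chain())
```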
... Calcium imaging analysis has unveiled recurrent activations within neuronal networks 13 , including Up-Down state activity 14 . These networks can range in size from a few cells in cultured environments 13 to extensive populations exceeding millions in brain slices or in vivo settings 15 . ...
... Calcium imaging analysis has unveiled recurrent activations within neuronal networks 13 , including Up-Down state activity 14 . These networks can range in size from a few cells in cultured environments 13 to extensive populations exceeding millions in brain slices or in vivo settings 15 . In contrast, much less is known about such recurrent activity in astrocytes, particularly regarding the local connectivity patterns. ...
Article
Full-text available
Astrocytes form extensive networks with diverse calcium activity, yet the organization and connectivity of these networks across brain regions remain largely unknown. To address this, we developed AstroNet, a data-driven algorithm that uses two-photon calcium imaging to map temporal correlations in astrocyte activation. By organizing individual astrocyte activation events chronologically, our method reconstructs functional networks and extracts local astrocyte correlations. We create a graph of the astrocyte network by tallying direct co-activations between pairs of cells along these activation pathways. Applied to the CA1 hippocampus and motor cortex, AstroNet reveals notable differences: astrocytes in the hippocampus display stronger connectivity, while cortical astrocytes form sparser networks. In both regions, smaller, tightly connected sub-networks are embedded within a larger, loosely connected structure. This method not only identifies astrocyte activation paths and connectivity but also reveals distinct, region-specific network patterns, providing new insights into the functional organization of astrocytic networks in the brain.
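The chronological co-activation idea in this abstract can be illustrated with a short sketch. The helper below is a hypothetical reimplementation, not the authors' released AstroNet code: it sorts activation events in time and tallies direct transitions between successive events of different cells as weighted edges; the max_gap window is an assumed parameter.

```python
import numpy as np
from collections import defaultdict

# Illustrative sketch of chronological co-activation graph construction:
# sort activation events in time, then count direct transitions between
# successive events of different cells as edge weights.

def coactivation_graph(events, max_gap=1.0):
    """events: list of (time, cell_id); max_gap: assumed window (s)."""
    events = sorted(events)                     # chronological ordering
    edges = defaultdict(int)
    for (t0, c0), (t1, c1) in zip(events, events[1:]):
        if c0 != c1 and (t1 - t0) <= max_gap:   # direct co-activation
            edges[(c0, c1)] += 1                # tally the pair
    return dict(edges)

demo = [(0.1, 'a'), (0.4, 'b'), (0.6, 'c'), (5.0, 'a'), (5.2, 'c')]
print(coactivation_graph(demo))
```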
... Here it may be good to note that there is growing awareness among neuroscientists that, to understand mental representation, one needs to look at the way neurons are organized in ensembles (Yuste, 2015; Carrillo-Reid, 2022). That is to say, it has long been assumed that a response will always require the activation of a sufficient number of neurons that (together) code for the same thing, and that information processing requires both pattern completion and pattern separation, but researchers have only just started to discover the mechanisms that can explain these phenomena (e.g. ...
... In the previous section we saw that there is good reason to assume that the functional unit in neural processes is the neuronal ensemble rather than the individual neuron (Yuste, 2015; Carrillo-Reid, 2022). If that is true, then it makes sense for philosophers to see the mental phenomena that these ensembles underlie as a natural kind. ...
Preprint
Full-text available
That computationalists have always had problems explaining intentionality may well have to do with their view of information processing as the manipulation of symbols and/or the following of an algorithm. That is why here I explore the possibility that humans and other intelligent animals are more like analog systems, meaning that the representational and the implementational levels of analysis collapse into one. Concretely, I take recent studies on neural representation as a starting point to get a better picture of information processing in natural systems. I conclude that at least part of the information that is carried by a neural representation is 'reiterated' in the representations it is connected to. After all, neural representations are not moveable tokens. They can only become more or less active, and more or less connected. Moreover, this happens automatically, as representations covary with the phenomena they represent. Additionally, I argue that the fact that neurons come in structured ensembles may explain, firstly, how representations can activate each other and, for instance, generate inferences outside of awareness, and secondly, how they can be activated to different degrees so that the competition between the structures they constitute can be won by the most relevant one. Thirdly, I argue that there are many innate or prepared concepts that are relatively complex and that may serve as the building blocks of our core concepts, such as agent and cause. Lastly, I point to a problem for radical enactivists, namely that even a relatively simple response, such as freezing in the case of learned fears, involves representations in the part of the brain that processes semantic information. I conclude that computationalism is still viable, but we should let go of the idea that information processing is medium-independent.
... Complex animal behavior is believed to stem from the electrical activity of coordinated ensembles of neurons within specific brain circuits [1], [2]. For example, during sensory perception [3], [4] and motor coordination [5]-[7], correlated patterns of electrical activity in groups of neurons are observed in the primary sensory and motor cortices [8]-[10]. ...
... • Drifting gratings: A full-field drifting sinusoidal grating at a spatial frequency of 0.04 cycles/degree was presented at 8 different directions (from 0° to 315°, separated by 45°) and at 5 temporal frequencies (1, 2, 4, 8, 15 Hz). Each image was presented 50 times, in random order, and the response period was evaluated in the 0.5 seconds after stimulus onset. The experimental setting is depicted in Fig. A-1 ...
Preprint
Full-text available
Understanding complex animal behaviors hinges on deciphering the neural activity patterns within brain circuits, making the ability to forecast neural activity crucial for developing predictive models of brain dynamics. This capability holds immense value for neuroscience, particularly in applications such as real-time optogenetic interventions. While traditional encoding and decoding methods have been used to map external variables to neural activity and vice versa, they focus on interpreting past data. In contrast, neural forecasting aims to predict future neural activity, presenting a unique and challenging task due to the spatiotemporal sparsity and complex dependencies of neural signals. Existing transformer-based forecasting methods, while effective in many domains, struggle to capture the distinctiveness of neural signals characterized by spatiotemporal sparsity and intricate dependencies. To address this challenge, we here introduce QuantFormer, a transformer-based model specifically designed for forecasting neural activity from two-photon calcium imaging data. Unlike conventional regression-based approaches, QuantFormer reframes the forecasting task as a classification problem via dynamic signal quantization, enabling more effective learning of sparse neural activation patterns. Additionally, QuantFormer tackles the challenge of analyzing multivariate signals from an arbitrary number of neurons by incorporating neuron-specific tokens, allowing scalability across diverse neuronal populations. Trained with unsupervised quantization on the Allen dataset, QuantFormer sets a new benchmark in forecasting mouse visual cortex activity. It demonstrates robust performance and generalization across various stimuli and individuals, paving the way for a foundational model in neural signal prediction.
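The quantization step that turns forecasting into classification can be sketched simply. The snippet below assumes a fixed per-neuron quantile binning; QuantFormer's actual quantizer is learned, so this is only an illustration of the reframing.

```python
import numpy as np

# Sketch of "forecasting as classification": quantize each continuous
# calcium trace into K discrete levels, so a model predicts a class label
# per time step instead of regressing a real value. The per-neuron
# quantile binning here is an assumption, not the learned quantizer.

def quantize(trace, n_levels=8):
    # interior quantile bin edges, computed per neuron
    edges = np.quantile(trace, np.linspace(0, 1, n_levels + 1)[1:-1])
    return np.digitize(trace, edges)      # integer tokens in [0, n_levels)

trace = np.cumsum(np.random.default_rng(0).normal(size=200))  # toy trace
tokens = quantize(trace)
print(tokens[:20])
```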
... For example, Golgi stains and patch-clamp electrophysiology highlight individual neurons. This conceptual focus on individual neurons has led to a compartmentalization of knowledge that has obscured, to some extent, our ability to integrate data on how individual functions enable higher-order processes (Yuste, 2015). Moreover, the reductionist bias and a reliance on big data or methods-driven approaches in neuroscience have left us with many descriptions, but few explanations (Krakauer et al., 2017). ...
... Just as technological innovations drove the prior decades of neuroscience discoveries, a new era of advances is now tied to developments in modern computing, which have been a driving force behind progress in systems biology and integrative systems neuroscience (Kanter et al., 2022; Yuste, 2015). High-throughput sequencing, advanced neuroimaging techniques, and powerful computational tools have enabled the collection and analysis of vast amounts of data, propelling our understanding of neural circuits to new heights. ...
Article
Full-text available
Introduction

Neuroscientists have traditionally taken a reductionist approach to understanding the immense complexity of nervous systems. As is the case in other fields of biology, the method of reducing nervous systems into their constitutive parts has proven useful for understanding neural circuits and how they function. As a result, modern neuroscience has thrived on cataloging and scrutinizing individual components of complicated neural systems. However, substantial gaps persist in understanding how these disparate components connect and interact to generate higher-order functions. Bridging these gaps requires a concerted effort to integrate knowledge across sub-fields in neuroscience and, more broadly, across biology.

Systems biology is a scientific approach used to examine complex biological processes at the level of systems, rather than focusing on individual discrete parts (Kitano, 2002; Mesarovic, 1968). A “system” is a group of mutually dependent components that work together to form a unified whole. The goal of a systems approach is to understand a holistic big picture in the context of integrated systems that are dynamic and interrelated. By taking a systems biology approach to understanding the nervous system, we can attempt to integrate and understand interactions between the different neural components that give rise to higher-order emergent phenomena (Geschwind and Konopka, 2009; Grillner et al., 2005).

The struggle between understanding individual parts and the larger whole has been a part of neuroscience since its origin as a scientific discipline. Over a century ago, the field was shaped by the opposing theories of two leading neuroanatomists, Santiago Ramón y Cajal and Camillo Golgi. On the one hand, Golgi’s reticular doctrine posited that the nervous system was an interconnected nerve network (“a large syncytium”) that was seamless and continuous (Glickstein, 2012). In contrast, Cajal proposed the neuron doctrine, which stated that individual nerve cells were the basic structural and functional units of the nervous system (Cajal, 1888; Cajal, 1899). The structural evidence from the microscopes and stains available to scientists at the time supported Cajal’s neuron doctrine. In fact, it was actually Golgi’s la reazione nera or “black reaction” (now known as the Golgi stain) that produced the most convincing structural evidence that neurons were structurally separated elements. The introduction of the electron microscope in the 1940s definitively demonstrated that neurons were not continuous but were instead distinct entities separated by synapses with extracellular space in between them (Palay, 1956; Porter et al., 1945). While both Ramón y Cajal and Golgi were awarded the Nobel Prize in 1906 for their work on the structure of the nervous system, it was Ramón y Cajal who would be widely considered the founder of modern neuroscience, and his neuron doctrine has long served as a foundation for the field.

Perhaps because of this foundation on the neuron doctrine, many of the workhorse techniques and methods in modern neuroscience have been catered to investigating individual components that make up neural circuits. For example, Golgi stains and patch-clamp electrophysiology highlight individual neurons. This conceptual focus on individual neurons has led to a compartmentalization of knowledge that has obscured, to some extent, our ability to integrate data on how individual functions enable higher-order processes (Yuste, 2015). Moreover, the reductionist bias and a reliance on big data or methods-driven approaches in neuroscience have left us with many descriptions, but few explanations (Krakauer et al., 2017). As a result, what is generally lacking in the field are accepted theories of nervous system function that explain how individual neurons or groups of neurons (e.g., circuits) contribute to neural systems that then give rise to behavior, cognition, or other emergent properties of nervous systems.
... This hypothesis provides a potential avenue for understanding the selective activation patterns in astrocytic networks during various physiological processes. Calcium imaging analysis has unveiled recurrent activations within neuronal networks [13], including Up-Down state activity [14]. These networks can range in size from a few cells in cultured environments [13] to extensive populations exceeding millions in brain slices or in vivo settings [15]. ...
... Calcium imaging analysis has unveiled recurrent activations within neuronal networks [13], including Up-Down state activity [14]. These networks can range in size from a few cells in cultured environments [13] to extensive populations exceeding millions in brain slices or in vivo settings [15]. In contrast, much less is known about such recurrent activity in astrocytes, particularly regarding the local connectivity patterns. ...
Preprint
Full-text available
Astrocytes form extended intercellular networks, displaying complex calcium activity. However, the specific organization of these astrocytic networks and the precise extent of their functional connectivity in different brain areas remain unexplored. To unveil the functional architecture of astrocytic networks, we developed, using a data-driven methodology, a novel algorithm called AstroNet that uses two-photon calcium imaging to map temporal correlations in activation events among neighboring astrocytes. Our approach involves reconstructing functional astrocytic networks by organizing individual astrocyte activation events chronologically. This chronological order creates activity paths that enable the extraction of local astrocyte functional correlations. Ultimately, by tallying the occurrences of direct co-activations between pairs of cells along these pathways, we construct a graph that mirrors the underlying astrocyte functional network. By applying this method to two distinct brain regions (CA1 hippocampus and motor cortex), we identified notable differences in local network organizations in sub-regions of around 20-40 astrocytes. Specifically, the cortex exhibited a lower connectivity, while astrocytes in the hippocampus displayed stronger connections. Moreover, we found that in both regions, astrocytic networks consist of smaller, tightly connected sub-networks embedded within a larger, more loosely connected one. Altogether, our innovative method enables the identification of activation paths among astrocytes, facilitates the characterization of local network functional connectivity, and quantifies distinct connectivity patterns among astrocytes from different brain regions. This approach sheds light on the heterogeneous functional organization of astrocytic networks within the brain, pointing to region-specific astrocyte connectivity.
... Tuning is a fundamental characteristic of neurons across brain areas (Kriegeskorte and Wei, 2021; Tsouli et al., 2022). Neurons do not work in isolation, but as part of ensembles forming networks (Sakurai, 1996; Harris, 2005; Buzsáki, 2010; Yuste, 2015; Bharmauria et al., 2016a). Neuronal firing is inherently noisy by nature, depending on several factors such as the task, the state of the animal, methodological concerns, and others (Faisal et al., 2008; McDonnell and Ward, 2011; Barth and Poulet, 2012; Molotchnikoff and Rouat, 2012). ...
... Analogously, if the brain is viewed as a 'self-organizing' machine working at the edge of chaos and criticality (Bob, 2007; Ponzi, 2017; Zhuravlev, 2023), the observed results can be interpreted as follows. Building on Donald Hebb's cell assembly theory (Hebb, 1949), a neuron ensemble selects specific connections to encode stimuli/behavior from a random set of possible connections (Edelman, 1987; Buzsáki, 2010; Singer, 2013; Yuste, 2015; Bharmauria et al., 2016a). Ketamine-induced over-connectivity disrupts network selectivity, creating a chaotic state by breaking down established neural assemblies (Buzsáki, 2010). ...
... The untangling framework has been extended to address the structure of neural populations that represent object categories more generally by characterizing the geometry of the high-dimensional representational manifolds 34,35,49-52 . The capacity and dynamics of representational geometries correlate in visual cortex with classification behavior 42,53,54 , in parietal and prefrontal cortices with perceptual decision-making 23,55-57 , and in motor cortex with control of muscle activity 58-61 . ...
Article
Full-text available
In natural visually guided behavior, observers must separate relevant information from a barrage of irrelevant information. Many studies have investigated the neural underpinnings of this ability using artificial stimuli presented on blank backgrounds. Natural images, however, contain task-irrelevant background elements that might interfere with the perception of object features. Recent studies suggest that visual feature estimation can be modeled through the linear decoding of task-relevant information from visual cortex. So, if the representations of task-relevant and irrelevant features are not orthogonal in the neural population, then variation in the task-irrelevant features would impair task performance. We tested this hypothesis using human psychophysics and monkey neurophysiology combined with parametrically variable naturalistic stimuli. We demonstrate that (1) the neural representation of one feature (the position of an object) in visual area V4 is orthogonal to those of several background features, (2) the ability of human observers to precisely judge object position was largely unaffected by those background features, and (3) many features of the object and the background (and of objects from a separate stimulus set) are orthogonally represented in V4 neural population responses. Our observations are consistent with the hypothesis that orthogonal neural representations can support stable perception of object features despite the richness of natural visual scenes. Supplementary Information The online version contains supplementary material available at 10.1038/s41598-025-88910-8.
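The orthogonality test implied by this abstract reduces to comparing linear decoding axes. Below is a minimal sketch with synthetic population responses (the real analysis uses V4 recordings): fit a least-squares decoding axis for each stimulus feature, then measure the angle between the two axes.

```python
import numpy as np

# Sketch of an orthogonality check between feature representations:
# fit a linear decoding axis per feature, then compare the axes.
# Synthetic data; all sizes and noise levels are assumptions.

rng = np.random.default_rng(1)
n_trials, n_neurons = 500, 50
w_pos = rng.normal(size=n_neurons)     # ground-truth "object position" axis
w_bg = rng.normal(size=n_neurons)      # ground-truth "background" axis
pos = rng.normal(size=n_trials)        # feature values per trial
bg = rng.normal(size=n_trials)
R = (np.outer(pos, w_pos) + np.outer(bg, w_bg)
     + rng.normal(scale=0.5, size=(n_trials, n_neurons)))  # responses

# least-squares decoding axes for each feature
a_pos, *_ = np.linalg.lstsq(R, pos, rcond=None)
a_bg, *_ = np.linalg.lstsq(R, bg, rcond=None)

cosine = a_pos @ a_bg / (np.linalg.norm(a_pos) * np.linalg.norm(a_bg))
print(f"angle between decoding axes: "
      f"{np.degrees(np.arccos(abs(cosine))):.1f} deg")
```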
... This limitation has impeded our understanding of their functional heterogeneity and their involvement in networks that drive brain function and behavior 3 . Although it is accepted that neural circuit function and brain representations arise from the activation of ensembles of neurons 4 , how or to what extent astrocytes exhibit a similar functional and circuit specialization remains unresolved. To assess the cognitive role of astrocytes in complex brain circuits, it is essential to identify and manipulate the activity of functionally specified astrocyte subsets, independently of their location and molecular identity. ...
Article
Full-text available
Astrocytes, dynamic cells crucial to brain function, have traditionally been overshadowed by the emphasis on neuronal activity in regulating behavior. Unlike neurons, which are organized into ensembles that encode different brain representations, astrocytes have long been considered a homogeneous population. This is partly because of the lack of tools available to map and manipulate specific subsets of astrocytes based on their functional activity, obscuring the extent of their specialization in circuits. Here, using AstroLight, a tool that translates astrocytic activity-mediated calcium signals into gene expression in a light-dependent manner, we have identified an astrocytic ensemble, a functionally specified subset of astrocytes that emerges upon activity during cue-motivated behaviors in the nucleus accumbens, an integrator hub in the reward system. Furthermore, through gain-of-function and loss-of-function manipulations, we demonstrate that this ensemble is essential for modulating cue–reward associations. These findings highlight the specialization of astrocytes into ensembles and their fine-tuning role in shaping salient behavior.
... [4-9] Under this limitation, considerable attempts have been made toward understanding how the brain processes information using a variety of developing theoretical frameworks. [10-15] One of the analytic frameworks developed within the last two decades is state-space analysis, 16 which provides a mechanistic structure of the information processed in the lower-dimensional space of a neural population. [17-19] This analytical tool identifies dynamic neural population structures that reflect information processing for general biological features 20,21 and has allowed us to describe those features as a neural geometry with high temporal resolution [13-15] on the sub-second scale. ...
... These microcolumns, consisting of hundreds of neurons, represent localized processing units within the cortex. They exhibit dense intra-connectivity within the column and sparser inter-columnar connections, mirroring the modularity seen at larger scales [20]. Such mesoscopic modular structures enable localized processing while simultaneously contributing to the broader functional architecture of the brain [21], [22], reinforcing the principle of modularity across different levels of neural organization. ...
Preprint
Full-text available
Modularity is a fundamental organizational principle in complex networks, including the brain, where it supports scalability, flexibility, and robustness. This study examines the influence of modularity on information capacity in neural networks, with a specific focus on the interplay between excitatory and inhibitory connectivity in balanced networks. Using a computational model of neuronal networks, we evaluate the information capacity of different modular architectures composed of excitatory and inhibitory neurons, with varying probabilities of connections type both within and between modules while maintaining a global balance between excitation and inhibition. By analyzing the networks' dynamical states with different levels of external inputs, we explore how different connectivity patterns shape the network's information capacity. Our findings indicate that global long-range excitation drives the system into periodic states, resulting in minimal information content. Conversely, a combination of inter-module excitatory and inhibitory interconnections generates correlated activity among modules, limiting the scaling of global information with the number of modules. In contrast, exclusive inhibitory interconnections foster uncorrelated, intermittent activity within modules, maximizing information capacity both locally and across the entire network. This study highlights the importance of differential connectivity patterns in excitatory and inhibitory synapses, as well as modular organization, in optimizing the brain's information processing capabilities. Our findings provide valuable insights into the mechanisms underlying neural dynamics and their role in efficient information processing.
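The architectures being compared can be made concrete with a block-structured weight matrix. The sketch below assumes illustrative connection probabilities and unit weights (not the paper's parameters): dense balanced wiring within modules and a tunable mix of excitatory versus inhibitory connections between modules.

```python
import numpy as np

# Sketch of a modular E/I architecture: a block-structured weight matrix
# with dense within-module wiring and separate probabilities for
# excitatory vs inhibitory inter-module connections. All values are
# illustrative assumptions.

def modular_weights(n_modules=4, n_per=50, p_in=0.2,
                    p_out_exc=0.02, p_out_inh=0.02, seed=0):
    rng = np.random.default_rng(seed)
    n = n_modules * n_per
    module = np.repeat(np.arange(n_modules), n_per)
    same = module[:, None] == module[None, :]      # same-module mask
    W = np.zeros((n, n))
    # within-module: balanced mix of excitation and inhibition
    mask_in = same & (rng.random((n, n)) < p_in)
    W[mask_in] = rng.choice([1.0, -1.0], size=mask_in.sum())
    # between-module: separate E and I probabilities
    # (any overlapping entries resolve to inhibitory, applied last)
    mask_e = ~same & (rng.random((n, n)) < p_out_exc)
    mask_i = ~same & (rng.random((n, n)) < p_out_inh)
    W[mask_e], W[mask_i] = 1.0, -1.0
    np.fill_diagonal(W, 0.0)
    return W

W = modular_weights()
print(W.shape, (W > 0).mean(), (W < 0).mean())
```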
... Low-frequency theta oscillations are ideal for coordinating the neural activity between distal regions (Fries, 2015;Helfrich and Knight, 2016;Phillips et al., 2014). Oscillations temporally coordinate the activity between two regions to facilitate information transfer (Canolty and Knight, 2010;Yuste, 2015). In the absence of long-range synchrony, inputs may arrive at random phases of the excitability cycle hindering effective communication between regions. ...
Preprint
Full-text available
Debilitating anxiety is pervasive in the modern world. Choices to approach or avoid are common in everyday life, and excessive avoidance is a cardinal feature of all anxiety disorders. Here, we used intracranial EEG to define the distributed prefrontal-limbic circuit dynamics supporting approach and avoidance. Presurgical epilepsy patients (n=20) performed an approach-avoidance conflict decision-making task inspired by the arcade game Pac-Man, in which participants trade off harvesting rewards in real time against potential losses from attack. As patients approached increasing rewards and threats, we found evidence of a limbic circuit mediated by increased theta power in the hippocampus, amygdala, orbitofrontal cortex (OFC), and anterior cingulate cortex (ACC), which then drops rapidly during avoidance. Theta-band connectivity between these regions increases during approach and falls during avoidance, with the OFC serving as a connector in this circuit, showing high theta coherence across limbic regions but also with regions outside of the limbic system, including the lateral prefrontal cortex. Importantly, the degree of OFC-driven connectivity predicts how long participants approach, with enhanced network synchronicity extending approach times. Finally, under ghost attack, the system dynamically switches to a sustained increase in high-frequency activity (70-150 Hz) in the middle frontal gyrus (MFG), marking the retreat from the ghost. The results provide evidence for a distributed prefrontal-limbic circuit, mediated by theta oscillations, underlying approach-avoidance conflict.
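The theta-band connectivity measure at the heart of this analysis can be sketched with standard tools. Below, magnitude-squared coherence between two synthetic "channels" sharing a 6 Hz rhythm is averaged over an assumed 4-8 Hz theta band; the sampling rate, band limits, and signals are all illustrative.

```python
import numpy as np
from scipy.signal import coherence

# Sketch: theta-band coherence between two channels that share a common
# 6 Hz rhythm buried in noise. Band limits and fs are assumptions.

fs = 500.0                                   # assumed sampling rate (Hz)
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(2)
theta = np.sin(2 * np.pi * 6 * t)            # shared 6 Hz rhythm
ofc = theta + rng.normal(scale=1.0, size=t.size)   # synthetic channel 1
hpc = theta + rng.normal(scale=1.0, size=t.size)   # synthetic channel 2

f, cxy = coherence(ofc, hpc, fs=fs, nperseg=1024)  # Welch-based coherence
band = (f >= 4) & (f <= 8)                         # theta band
print(f"theta-band coherence: {cxy[band].mean():.2f}")
```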
... The first half of the 20th century saw a move from viewing neurons as isolated units to appreciating them as components of an intricate network [75]. A crucial turning point was the study of synaptic transmission, which elucidated inter-neuronal communication. ...
... The brain is a powerful computing machine, "trained" by millions of years of evolution to process, represent, and interpret the thousands of incoming stimuli it is exposed to on a daily basis. The prevailing hypothesis suggests that the brain encodes information about such external inputs through patterns of neural spiking activity in sensory areas, often observed to reside within lower-dimensional manifolds (1)(2)(3), which constitute an internal representation of the external world (4,5). ...
Preprint
Full-text available
The brain encodes external stimuli through patterns of neural activity, forming internal representations of the world. Recent experiments show that neural representations for a given stimulus change over time. However, the mechanistic origin for the observed "representational drift" (RD) remains unclear. Here, we propose a biologically-realistic computational model of the piriform cortex to study RD in the mammalian olfactory system by combining two mechanisms for the dynamics of synaptic weights at two separate timescales: spontaneous fluctuations on a scale of days and spike-time dependent plasticity (STDP) on a scale of seconds. Our study shows that, while spontaneous fluctuations in synaptic weights induce RD, STDP-based learning during repeated stimulus presentations can reduce it. Our model quantitatively explains recent experiments on RD in the olfactory system and offers a mechanistic explanation for the emergence of drift and its relation to learning, which may be useful to study RD in other brain regions.
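The two-timescale mechanism in this model is easy to caricature in code. The sketch below combines slow multiplicative weight fluctuations (the "days" timescale) with pair-based STDP updates during repeated presentations (the "seconds" timescale); all constants and the exact update rules are assumptions, not the paper's fitted model.

```python
import numpy as np

# Sketch of two-timescale synaptic dynamics: slow spontaneous weight
# fluctuations (days) plus pair-based STDP during presentations (seconds).
# Constants are illustrative assumptions.

rng = np.random.default_rng(3)

def daily_fluctuation(w, sigma=0.05):
    """Spontaneous drift: multiplicative log-normal jitter per day."""
    return np.clip(w * np.exp(rng.normal(0, sigma, w.shape)), 0, None)

def stdp(w, dt_spike, a_plus=0.01, a_minus=0.012, tau=20e-3):
    """Pair-based STDP: potentiate if pre precedes post (dt > 0)."""
    dw = np.where(dt_spike > 0,
                  a_plus * np.exp(-dt_spike / tau),
                  -a_minus * np.exp(dt_spike / tau))
    return np.clip(w + dw, 0, None)

w = rng.uniform(0.4, 0.6, size=100)            # synaptic weights
for day in range(10):
    w = daily_fluctuation(w)                   # slow drift
    for _ in range(20):                        # repeated presentations
        dts = rng.normal(5e-3, 10e-3, size=w.shape)  # pre-post lags (s)
        w = stdp(w, dts)                       # fast learning
print(f"mean weight {w.mean():.3f}, std {w.std():.3f}")
```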
... Detailed examination of the subcellular roots of population activity is important for understanding network activity (Buzsáki et al., 2012; Yuste, 2015). Examining single-unit neuronal activity from a specific brain region and comparing it to low-frequency recordings of the local field potential creates a reductionist description of network-mediated brain function (Buzsáki and Draguhn, 2004). ...
Article
Full-text available
Astrocytes are active cells involved in brain function through bidirectional communication with neurons, in which astrocyte calcium plays a crucial role. Synaptically evoked calcium increases can be localized to independent subcellular domains or expand to the entire cell, i.e., a calcium surge. Because a single astrocyte may contact ~100,000 synapses, the control of intracellular calcium signal propagation may have relevant consequences for brain function. Yet the properties governing the spatial dynamics of astrocyte calcium remain poorly defined. Imaging subcellular responses of cortical astrocytes to sensory stimulation in mice, we show that sensory-evoked astrocyte calcium responses originated and remained localized in domains of the astrocytic arborization, but eventually propagated to the entire cell if a spatial threshold of >23% of the arborization being activated was surpassed. Using Itpr2-/- mice, we found that type-2 IP3 receptors were necessary for the generation of the astrocyte calcium surge. We finally show, using in situ electrophysiological recordings, that the spatial threshold of the astrocyte calcium signal consequently determined gliotransmitter release. The present results reveal a fundamental property of astrocyte physiology, i.e., a spatial threshold for astrocyte calcium propagation, which depends on astrocyte intrinsic properties and governs astrocyte integration of local synaptic activity and subsequent neuromodulation.
... Ensembles of neurons amalgamate into physiological regions and yield functional properties and states in the brain 36 . The human brain system (including learning, memory, reasoning, thought, feeling, emotion, vision, and hearing) is constructed from a (piecewise) continuous combination of a finite number of the signal-transfer relationships of neurons and the neurotransmission relationships within their corresponding synapses. ...
Preprint
Full-text available
Artificial Intelligence (AI) has apparently become one of the most important techniques discovered by humans in history, while the human brain is widely recognized as one of the most complex systems in the universe. One fundamental critical question, which would affect human sustainability, remains open: will artificial intelligence (AI) evolve to surpass human intelligence in the future? This paper shows that, in theory, new AI twins built with fresh cellular-level AI techniques for neuroscience could approximate the brain and its functioning systems (e.g., perception and cognition functions) to any expected small error, and that AI without restrictions could ultimately surpass human intelligence with probability one. This paper indirectly proves the validity of the conjecture made by Frank Rosenblatt 70 years ago about the potential capabilities of AI, especially in the realm of artificial neural networks. Intelligence is just one of the fortuitous but sophisticated creations of nature that has not been fully discovered. Like mathematics and physics, artificial intelligence with no restrictions would lead to a new subject with its own self-contained systems and principles. We anticipate that this paper opens new doors for 1) AI twins and other AI techniques to be used in cellular-level, efficient neuroscience dynamic analysis, functioning analysis of the brain, and brain illness solutions; 2) a new worldwide collaborative scheme for interdisciplinary teams concurrently working on and modelling different types of neurons and synapses and different levels of functioning subsystems of the brain with AI techniques; 3) the development of low-energy AI techniques with the aid of fundamental neuroscience properties; and 4) new controllable, explainable, and safe AI techniques with reasoning capabilities for discovering principles in nature.
... The application of the theory of causal emergence to fields such as neuroscience is already underway. Specifically, it may help solve longstanding problems in neuroscience involving scale, such as the debate over whether brain circuitry functions at the scale of neural ensembles or individual neurons (Buxhoeveden and Casanova 2002; Yuste 2015). It has also been proposed that the brain integrates information at a higher level (Tononi 2008), and it was proven that integrated information can indeed peak at a macroscale in . ...
Preprint
The causal structure of any system can be analyzed at a multitude of spatial and temporal scales. It has long been thought that while higher scale (macro) descriptions of causal structure may be useful to observers, they are at best a compressed description and at worse leave out critical information. However, recent research applying information theory to causal analysis has shown that the causal structure of some systems can actually come into focus (be more informative) at a macroscale (Hoel et al. 2013). That is, a macro model of a system (a map) can be more informative than a fully detailed model of the system (the territory). This has been called causal emergence. While causal emergence may at first glance seem counterintuitive, this paper grounds the phenomenon in a classic concept from information theory: Shannon's discovery of the channel capacity. I argue that systems have a particular causal capacity, and that different causal models of those systems take advantage of that capacity to various degrees. For some systems, only macroscale causal models use the full causal capacity. Such macroscale causal models can either be coarse-grains, or may leave variables and states out of the model (exogenous) in various ways, which can improve the model's efficacy and its informativeness via the same mathematical principles of how error-correcting codes take advantage of an information channel's capacity. As model choice increase, the causal capacity of a system approaches the channel capacity. Ultimately, this provides a general framework for understanding how the causal structure of some systems cannot be fully captured by even the most detailed microscopic model.
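A worked toy example helps make causal emergence concrete. Following the effective-information definition EI = I(X;Y) under a uniform intervention distribution over states, the sketch below compares a noisy 4-state micro transition matrix with its 2-state coarse-graining; the specific matrices are illustrative, chosen so that the macro model is more deterministic and therefore carries higher EI.

```python
import numpy as np

# Toy causal-emergence example: effective information EI = I(X;Y) with X
# uniform over states. A noisy 4-state micro TPM coarse-grains into a
# deterministic 2-state macro TPM with higher EI. Matrices are illustrative.

def effective_information(tpm):
    tpm = np.asarray(tpm, dtype=float)
    n = tpm.shape[0]
    p_y = tpm.mean(axis=0)               # effect distribution under uniform do()
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = tpm * np.log2(tpm / p_y)  # 0*log(0) entries become nan
    return np.nansum(terms) / n           # mutual information I(X;Y)

# micro: states 0-2 mix uniformly among themselves; state 3 is a fixed point
micro = np.array([[1/3, 1/3, 1/3, 0.0],
                  [1/3, 1/3, 1/3, 0.0],
                  [1/3, 1/3, 1/3, 0.0],
                  [0.0, 0.0, 0.0, 1.0]])
# macro: group {0,1,2} -> A, {3} -> B; the within-group noise averages out
macro = np.array([[1.0, 0.0],
                  [0.0, 1.0]])

print(f"EI micro = {effective_information(micro):.3f} bits")  # ~0.811
print(f"EI macro = {effective_information(macro):.3f} bits")  # 1.000
```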
... 1. Introduction. In the past decades, single-neuron recordings have been complemented by multineuronal experimental techniques, which have provided quantitative evidence that the cells forming the nervous systems are coupled both structurally [8] and functionally (for a recent review, see [75] and references therein). An important question in neuroscience concerns the relationship between electrical activity at the level of individual neurons and the emerging spatio-temporal coherent structures observed experimentally using local field potential recordings [22], functional magnetic resonance imaging [69] and electroencephalography [58]. ...
Preprint
We study coarse pattern formation in a cellular automaton modelling a spatially-extended stochastic neural network. The model, originally proposed by Gong and Robinson [36], is known to support stationary and travelling bumps of localised activity. We pose the model on a ring and study the existence and stability of these patterns in various limits using a combination of analytical and numerical techniques. In a purely deterministic version of the model, posed on a continuum, we construct bumps and travelling waves analytically using standard interface methods from neural fields theory. In a stochastic version with Heaviside firing rate, we construct approximate analytical probability mass functions associated with bumps and travelling waves. In the full stochastic model posed on a discrete lattice, where a coarse analytic description is unavailable, we compute patterns and their linear stability using equation-free methods. The lifting procedure used in the coarse time-stepper is informed by the analysis in the deterministic and stochastic limits. In all settings, we identify the synaptic profile as a mesoscopic variable, and the width of the corresponding activity set as a macroscopic variable. Stationary and travelling bumps have similar meso- and macroscopic profiles, but different microscopic structure, hence we propose lifting operators which use microscopic motifs to disambiguate between them. We provide numerical evidence that waves are supported by a combination of high synaptic gain and long refractory times, while meandering bumps are elicited by short refractory times.
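The microscopic model class discussed here can be illustrated with a minimal three-state automaton on a ring, in the spirit of (but not identical to) the Gong-Robinson model: quiescent cells fire when enough neighbours within a coupling radius fired on the previous step, and firing cells become refractory for a few steps. Parameters are assumptions chosen to sustain a localized bump of activity.

```python
import numpy as np

# Sketch: stochastic three-state cellular automaton on a ring.
# States: 0 = quiescent, 1 = firing, 2 = refractory. A quiescent cell
# fires if >= threshold neighbours fired last step (or spontaneously).
# Parameters are illustrative assumptions.

def step(state, clock, radius=4, threshold=2, tau_r=3,
         p_spont=0.001, rng=None):
    n = state.size
    fired = (state == 1).astype(int)
    # count firing cells within the coupling radius (ring topology)
    neigh = sum(np.roll(fired, s) for s in range(-radius, radius + 1)) - fired
    new = state.copy()
    drive = (neigh >= threshold) | (rng.random(n) < p_spont)
    new[(state == 0) & drive] = 1            # quiescent -> firing
    new[state == 1] = 2                      # firing -> refractory
    clock[state == 1] = tau_r                # start refractory countdown
    new[(state == 2) & (clock <= 1)] = 0     # refractory -> quiescent
    clock[state == 2] -= 1
    return new, clock

rng = np.random.default_rng(4)
state = np.zeros(200, dtype=int)
state[95:105] = 1                            # seed a localized bump
clock = np.zeros(200, dtype=int)
for _ in range(500):
    state, clock = step(state, clock, rng=rng)
print("active cells after 500 steps:", int((state == 1).sum()))
```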
... Recent advances in multi-electrode recording techniques allow simultaneous measurements of neural activity from a large population of interacting neurons [1,2]. A population of neurons encodes various information by its collective spiking activity patterns, namely, neural codewords [3]. ...
Preprint
A network of neurons in the central nervous system collectively represents information by its spiking activity states. Typically observed states, i.e., codewords, occupy only a limited portion of the state space due to constraints imposed by network interactions. Geometrical organization of codewords in the state space, critical for neural information processing, is poorly understood due to its high dimensionality. Here, we explore the organization of neural codewords using retinal data by computing the entropy of codewords as a function of Hamming distance from a particular reference codeword. Specifically, we report that the retinal codewords in the state space are divided into multiple distinct clusters separated by entropy-gaps, and that this structure is shared with well-known associative memory networks in a recallable phase. Our analysis also elucidates a special nature of the all-silent state. The all-silent state is surrounded by the densest cluster of codewords and located within a reachable distance from most codewords. This codeword-space structure quantitatively predicts typical deviation of a state-trajectory from its initial state. Altogether, our findings reveal a non-trivial heterogeneous structure of the codeword-space that shapes information representation in a biological network.
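The shell-entropy analysis described here is straightforward to sketch. The function below groups binary codewords by Hamming distance from a reference word (e.g., the all-silent state) and computes the entropy of the codeword distribution within each distance shell; the synthetic sparse patterns stand in for retinal data.

```python
import numpy as np
from collections import Counter

# Sketch: entropy of observed codewords as a function of Hamming distance
# from a reference codeword. Synthetic sparse binary patterns stand in
# for retinal population data.

def shell_entropies(words, reference):
    dists = (words != reference).sum(axis=1)      # Hamming distances
    out = {}
    for d in np.unique(dists):
        shell = words[dists == d]                 # codewords in this shell
        counts = Counter(map(tuple, shell))
        p = np.array(list(counts.values()), dtype=float)
        p /= p.sum()
        out[int(d)] = float(-(p * np.log2(p)).sum())  # shell entropy (bits)
    return out

rng = np.random.default_rng(5)
words = (rng.random((5000, 20)) < 0.1).astype(int)  # sparse firing patterns
all_silent = np.zeros(20, dtype=int)                # all-silent reference
print(shell_entropies(words, all_silent))
```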
... To uncover the neural circuit mechanisms underlying animal behavior, e.g., working memory or decision making, is a fundamental issue in systems neuroscience [1,2]. Recent developments in multi-neuron recording methods make simultaneous recording of neuronal population activity possible, which gives rise to the challenging computational tasks of finding basic circuit variables responsible for the observed collective behavior of neural populations [3]. ...
Preprint
To understand the collective spiking activity in neuronal populations, it is essential to reveal the basic circuit variables responsible for these emergent functional states. Here, I develop a mean-field theory for the population coupling recently proposed in studies of the visual cortex of mouse and monkey, relating individual neuron activity to the population activity, and extend the original form to second order, relating the activity of neuron pairs to the population activity, to explain the higher-order correlations observed in neural data. I test the computational framework on salamander retinal data and on cortical spiking data from behaving rats. For the retinal data, the original form of population coupling and its advanced form can explain a significant fraction of two-cell correlations and three-cell correlations, respectively. For the cortical data, the performance becomes much better, and the second-order population coupling reveals non-local effects in local cortical circuits.
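First-order population coupling has a compact empirical definition: the correlation of each neuron's spike count with the summed activity of the rest of the population. A sketch with synthetic spike counts (standing in for recordings) follows.

```python
import numpy as np

# Sketch of first-order population coupling: correlate each neuron's
# spike count with the summed activity of the rest of the population.
# Synthetic Poisson counts with a shared drive stand in for recordings.

rng = np.random.default_rng(6)
n_neurons, n_bins = 50, 5000
shared = rng.poisson(2.0, size=n_bins)               # common population drive
gain = rng.uniform(0.0, 1.0, size=n_neurons)         # per-neuron coupling
counts = rng.poisson(gain[:, None] * shared + 1.0)   # neurons x time bins

pop_rate = counts.sum(axis=0)
coupling = np.array([
    np.corrcoef(counts[i], pop_rate - counts[i])[0, 1]  # exclude own spikes
    for i in range(n_neurons)
])
print(coupling.round(2)[:10])
```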
... First, distinct cortical states were defined based on spontaneous population activity. Analyzing neuronal populations can uncover complex dynamics, emergent properties, and intricate patterns of neural communication that are often not apparent at the single-neuron level (Yuste, 2015). However, individual neurons might deviate from the overall pattern of population activity. ...
... Ensembles of neurons amalgamate into physiological regions and yield functional properties and states in the brain 36 . The human brain system (including learning, memory, reasoning, thought, feeling, emotion, vision, and hearing) is constructed from a (piecewise) continuous combination of a finite number of the signal-transfer relationships of neurons and the neurotransmission relationships within their corresponding synapses. ...
Article
Full-text available
Artificial Intelligence (AI) has apparently become one of the most important techniques discovered by humans in history, while the human brain is widely recognized as one of the most complex systems in the universe. One fundamental critical question, which would affect human sustainability, remains open: will artificial intelligence (AI) evolve to surpass human intelligence in the future? This paper shows that, in theory, new AI twins built with fresh cellular-level AI techniques for neuroscience could approximate the brain and its functioning systems (e.g., perception and cognition functions) to any expected small error, and that AI without restrictions could ultimately surpass human intelligence with probability one. This paper indirectly proves the validity of the conjecture made by Frank Rosenblatt 70 years ago about the potential capabilities of AI, especially in the realm of artificial neural networks. This paper also answers two widely discussed fundamental questions: 1) whether AI could have the potential to discover new principles in nature; and 2) whether the error backpropagation (BP) algorithm, commonly and efficiently used to tune parameters in AI applications, is also adopted by the brain. Intelligence is just one of the fortuitous but sophisticated creations of nature that has not been fully discovered. Like mathematics and physics, artificial intelligence with no restrictions would lead to a new subject with its own self-contained systems and principles. We anticipate that this paper opens new doors for 1) AI twins and other AI techniques to be used in cellular-level, efficient neuroscience dynamic analysis, functioning analysis of the brain, and brain illness solutions; 2) a new worldwide collaborative scheme for interdisciplinary teams concurrently working on and modelling different types of neurons and synapses and different levels of functioning subsystems of the brain with AI techniques; 3) the development of low-energy AI techniques with the aid of fundamental neuroscience properties; and 4) new controllable, explainable, and safe AI techniques with reasoning capabilities for discovering principles in nature.
... The human nervous system supports functions such as cognition, decision-making, and consciousness (Yuste, 2015; Qi et al., 2019; Zhu et al., 2020). Neurological damage often permanently impairs physiological functions, such as paralysis following spinal cord injury or impaired speech and motor function after a stroke; these disabilities persist throughout the patient's lifetime, and self-repair is essentially impossible. ...
Article
Full-text available
Based on electrophysiological activity, neuroprostheses can effectively monitor and control neural activity. Currently, electrophysiological neuroprostheses are widely utilized in treating neurological disorders, particularly in restoring motor, visual, auditory, and somatosensory functions after nervous system injuries. They also help alleviate inflammation, regulate blood pressure, provide analgesia, and treat conditions such as epilepsy and Alzheimer’s disease, offering significant research, economic, and social value. Enhancing the targeting capabilities of neuroprostheses remains a key objective for researchers. Modeling and simulation techniques facilitate the theoretical analysis of interactions between neuroprostheses and the nervous system, allowing for quantitative assessments of targeting efficiency. Throughout the development of neuroprostheses, these modeling and simulation methods can save time, materials, and labor costs, thereby accelerating the rapid development of highly targeted neuroprostheses. This article introduces the fundamental principles of neuroprosthesis simulation technology and reviews how various simulation techniques assist in the design and performance enhancement of neuroprostheses. Finally, it discusses the limitations of modeling and simulation and outlines future directions for utilizing these approaches to guide neuroprosthesis design.
... This is largely because there are several difficulties in testing the hypothesis that representations continue to evolve during overtraining in cortex. First, for many animals, recording neurons is invasive and risks damaging the animals, so recordings are typically only taken at the end of training to study the end-time learned representation [Yuste, 2015; Kim et al., 2016]. Second, experiments in systems neuroscience do not typically design stimuli as "training" and "test" in the way datasets are constructed in deep learning, so making claims about learning and generalization is not possible. ...
Preprint
Full-text available
Does learning of task-relevant representations stop when behavior stops changing? Motivated by recent theoretical advances in machine learning and the intuitive observation that human experts continue to learn from practice even after mastery, we hypothesize that task-specific representation learning can continue, even when behavior plateaus. In a novel reanalysis of recently published neural data, we find evidence for such learning in posterior piriform cortex of mice following continued training on a task, long after behavior saturates at near-ceiling performance ("overtraining"). This learning is marked by an increase in decoding accuracy from piriform neural populations and improved performance on held-out generalization tests. We demonstrate that class representations in cortex continue to separate during overtraining, so that examples that were incorrectly classified at the beginning of overtraining can abruptly be correctly classified later on, despite no changes in behavior during that time. We hypothesize this hidden yet rich learning takes the form of approximate margin maximization; we validate this and other predictions in the neural data, as well as build and interpret a simple synthetic model that recapitulates these phenomena. We conclude by showing how this model of late-time feature learning implies an explanation for the empirical puzzle of overtraining reversal in animal learning, where task-specific representations are more robust to particular task changes because the learned features can be reused.
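The margin-maximization signature proposed here can be illustrated with a linear decoder. In the sketch below, synthetic population vectors whose class separation grows across "sessions" show that a linear SVM's accuracy saturates at ceiling while its margin (proportional to 1/||w||) keeps increasing; the data and separation schedule are assumptions, not the paper's analysis pipeline.

```python
import numpy as np
from sklearn.svm import LinearSVC

# Sketch: accuracy can saturate while the decoder margin keeps growing.
# Synthetic population vectors with increasing class separation stand in
# for piriform data recorded across overtraining sessions.

rng = np.random.default_rng(7)
for sep in [2.0, 3.0, 4.0]:                   # "sessions" of overtraining
    X = rng.normal(size=(400, 30))            # trials x neurons
    y = rng.integers(0, 2, size=400)          # two odor classes
    X[y == 1, 0] += sep                       # class separation grows
    clf = LinearSVC(C=1.0, max_iter=5000).fit(X, y)
    margin = 1.0 / np.linalg.norm(clf.coef_)  # geometric margin scale
    print(f"sep={sep:.0f}: acc={clf.score(X, y):.3f}, margin={margin:.3f}")
```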
... A comprehensive study across various simulated and experimental systems, including star and complex networks of Rössler systems, coupled hysteresis-based electronic oscillators, microcircuits of leaky integrate-and-fire model neurons, and recordings from in vitro cultures of spontaneously growing neuronal networks, shows that denser connectivity tends to locally enhance the emergence of stronger nonlinear dynamics signatures [28]. In 2015, Rafael Yuste [29] discussed the emergence and success of the neuron doctrine, attributing its development to the use of single-neuron techniques. He also highlighted the limitations arising from its narrow focus on neural circuits and subsequently examined the rise of neural network models, particularly in light of insights gained from new multineuronal recording methods. ...
Article
Full-text available
The application of nonlinear dynamics in neuroscience has provided essential insights into the functioning of the brain, shedding light on its complex behaviors and functions. This review reflects on the remarkable developments achieved in computational neuroscience, an emerging discipline that examines how the brain operates in terms of the information-processing features that constitute the nervous system’s construction. This comprehensive review covers a broad range of topics, including physiology, nonlinear dynamical methods for analyzing neural computation involving neural oscillations, synchronization events, chaotic dynamics, and network connections, brain circuit simulation, and tools for neuromorphic computation. In addition, we look at how nonlinear dynamics might help us comprehend neurological disorders, cognitive functioning, and the dynamical behavior of neural networks.
... However, a continuous effort is needed to overcome the technical, scientific, and ethical challenges associated with their use (Yuste, 2015). ...
Article
Full-text available
The use of induced pluripotent stem cells (iPSCs) in 21st-century neuroscience presents significant promise and challenges. These cells, reprogrammed from adult somatic cells, have the potential to model neurodegenerative diseases such as Alzheimer's, Parkinson's, and amyotrophic lateral sclerosis, allowing a deeper understanding of the underlying pathological mechanisms. In addition, iPSCs offer opportunities for the development of new therapies, including drug screening and cell therapy, aimed at correcting genetic defects and restoring neuronal function. However, the use of iPSCs in neuroscience faces several technical, scientific, and ethical challenges. Efficient differentiation of iPSCs into specific cell types of the central nervous system, reproducibility of results, and assurance of the safety of iPSC-based therapies are among the critical issues to be addressed. Moreover, ethical concerns related to the origin of the cells and to genetic manipulation must be carefully considered. Despite these challenges, significant advances have been made in the creation of more sophisticated cellular models, such as brain organoids, which recapitulate complex features of the developing and diseased human brain. Integration of multidisciplinary approaches, such as artificial intelligence and big data, may also offer valuable insights for advancing the understanding and treatment of neurological diseases. In short, iPSCs represent a powerful tool in modern neuroscience, offering new opportunities to elucidate the mechanisms of neurodegenerative diseases and to develop more effective therapies. However, a continuous effort is needed to overcome the technical, scientific, and ethical challenges associated with their use.
... Recall that an assembly is a stable set of highly intraconnected neurons in an area, representing, through their near-simultaneous excitation, a real-world object, episode, or idea (Hebb, 1949; Harris et al., 2003; Buzsáki, 2019). There is a growing consensus in neuroscience that assemblies of neurons play an important role in the way the brain works (Buzsáki, 2010; Huyck & Passmore, 2013; Yuste, 2015; Eichenbaum, 2018). It was established in Papadimitriou et al. (2020) and subsequent research, through both mathematics and simulation, that certain elementary behaviors of assemblies arise in NEMO: projection, association, and merge, among others. ...
Article
Full-text available
Even as machine learning exceeds human-level performance on many applications, the generality, robustness, and rapidity of the brain’s learning capabilities remain unmatched. How cognition arises from neural activity is the central open question in neuroscience, inextricable from the study of intelligence itself. A simple formal model of neural activity was proposed in Papadimitriou et al. (2020) and has been subsequently shown, through both mathematical proofs and simulations, to be capable of implementing certain simple cognitive operations via the creation and manipulation of assemblies of neurons. However, many intelligent behaviors rely on the ability to recognize, store, and manipulate temporal sequences of stimuli (planning, language, navigation, to list a few). Here we show that in the same model, sequential precedence can be captured naturally through synaptic weights and plasticity, and, as a result, a range of computations on sequences of assemblies can be carried out. In particular, repeated presentation of a sequence of stimuli leads to the memorization of the sequence through corresponding neural assemblies: upon future presentation of any stimulus in the sequence, the corresponding assembly and its subsequent ones will be activated, one after the other, until the end of the sequence. If the stimulus sequence is presented to two brain areas simultaneously, a scaffolded representation is created, resulting in more efficient memorization and recall, in agreement with cognitive experiments. Finally, we show that any finite state machine can be learned in a similar way, through the presentation of appropriate patterns of sequences. Through an extension of this mechanism, the model can be shown to be capable of universal computation. Taken together, these results provide a concrete hypothesis for the basis of the brain’s remarkable abilities to compute and learn, with sequences playing a vital role.
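The core assembly operation, projection, can be caricatured in a few lines. The sketch below is a simplified rendering of the NEMO-style loop described in Papadimitriou et al. (2020), with assumed sizes and plasticity rate: random connectivity, a top-k "cap" on firing, and multiplicative Hebbian potentiation of weights from active presynaptic neurons into the winners. Repetition drives the winner set to stabilize. (For simplicity, one weight matrix conflates the stimulus and recurrent pathways, which in the full model live in separate areas.)

```python
import numpy as np

# Simplified sketch of assembly projection: a stimulus repeatedly drives
# an area with random connectivity; only the k most strongly driven
# neurons fire (the "cap"), and weights from active presynaptic neurons
# into winners are multiplied by (1 + beta). Sizes and beta are assumed.

rng = np.random.default_rng(8)
n, k, p, beta = 1000, 50, 0.05, 0.1
stim = np.zeros(n)
stim[:k] = 1.0                                   # upstream stimulus assembly
W = (rng.random((n, n)) < p).astype(float)       # random synapses
prev = np.zeros(n)
overlap = 0.0
for _ in range(10):
    pre = np.flatnonzero(stim + prev)            # active presynaptic neurons
    drive = W[:, pre].sum(axis=1)                # feed-forward + recurrent drive
    winners = np.argsort(drive)[-k:]             # cap: only top-k fire
    W[np.ix_(winners, pre)] *= (1 + beta)        # Hebbian potentiation
    new = np.zeros(n)
    new[winners] = 1.0
    overlap = (new * prev).sum() / k             # stability of the winner set
    prev = new
print(f"final overlap of successive winner sets: {overlap:.2f}")
```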
... This further confirms the potential involvement of a central nervous mechanism in hypertension pathogenesis. As studies in neurology have deepened, scientists have found that neural function relies not solely on individual neurons but rather on complex neural networks composed of neuronal clusters (10). The dynamic balance of excitation and inhibition (E/I) is necessary for ensuring the optimal functionality of neural networks (11,12). ...
Article
Full-text available
Although an increasing number of anti-hypertensive drugs have been developed and used in the clinical setting, persistent deficiencies remain, including the need for lifelong dosing and combination therapy. Even among patients who endure these drawbacks and remain on treatment, approximately 4 in 5 still fail to achieve reliable blood pressure (BP) control. The application of neuromodulation in the context of hypertension presents a pioneering strategy for addressing this condition, concurrently implying a potential central nervous mechanism underlying hypertension onset. We hypothesize that neurological networks, an essential component of maintaining appropriate neurological function, are involved in hypertension. Drawing on both peer-reviewed research and our laboratory investigations, we investigate the underlying neural mechanisms of hypertension by identifying a close relationship between its onset and an excitation and inhibition (E/I) imbalance. In addition to the involvement of the excitatory glutamatergic and inhibitory GABAergic systems, the pathogenesis of hypertension is also associated with voltage-gated sodium channel (VGSC, Nav)-mediated E/I balance. Glutamate overload or enhanced glutamate receptor function may contribute to the E/I imbalance, ultimately triggering hypertension. GABA loss and GABA receptor dysfunction have also proven to be involved. Furthermore, we have identified that abnormalities in sodium channel expression and function alter neural excitability, thereby disturbing E/I balance and potentially serving as a mechanism underlying hypertension. These insights are expected to furnish potential strategies for the development of innovative anti-hypertensive therapies and a meaningful reference for the exploration of central nervous system (CNS) targets of anti-hypertensives.
... The current results extend this to show for the first time that an assembly of dopamine neurons can function to represent the content of errors, even outside the realm of value. That the same information available in the pattern of activity is not readily apparent in the activity of individual neurons is in accord with ideas guiding behavioral neurophysiology in other areas (Yuste, 2015), and suggests it is time to consider the functions of the dopamine system across rather than within individual neurons. ...
... • From single neurons to neuronal populations Neuroscientists once focused on studying individual neurons, but multineuronal recording methods now enable the study of ensembles of neurons [Yuste, 2015]. In interpretability, Anthropic's recent paper represents a similar shift from individual neurons to groups of neurons [Bricken et al., 2023]. ...
Preprint
Full-text available
As deep learning systems are scaled up to many billions of parameters, relating their internal structure to external behaviors becomes very challenging. Although daunting, this problem is not new: Neuroscientists and cognitive scientists have accumulated decades of experience analyzing a particularly complex system - the brain. In this work, we argue that interpreting both biological and artificial neural systems requires analyzing those systems at multiple levels of analysis, with different analytic tools for each level. We first lay out a joint grand challenge among scientists who study the brain and who study artificial neural networks: understanding how distributed neural mechanisms give rise to complex cognition and behavior. We then present a series of analytical tools that can be used to analyze biological and artificial neural systems, organizing those tools according to Marr's three levels of analysis: computation/behavior, algorithm/representation, and implementation. Overall, the multilevel interpretability framework provides a principled way to tackle neural system complexity; links structure, computation, and behavior; clarifies assumptions and research priorities at each level; and paves the way toward a unified effort for understanding intelligent systems, be they biological or artificial.
... The spatiotemporal dynamics of numerous natural systems can be well described by their eigenmodes in a unified framework, where the eigenmodes represent the fundamental resonant patterns determined by the systems' internal structures. Many findings from optical imaging and neuroelectrode recording have revealed that brain activities are spatially organized within neural networks orchestrated by axonal connections containing both excitatory and inhibitory synapses 11,12 . In the emerging field of neuro-manifolds, observations at various spatial scales have further demonstrated that, despite the vast number of neurons in the brain, their potential activity patterns on the axonal scaffold are limited to a finite set of spatial modes that capture a substantial portion of the collective behaviors emerging from aggregated neurons [13][14][15] . ...
Preprint
Full-text available
Functional magnetic resonance imaging (fMRI) has been widely employed for brain function mapping by localizing neural activities evoked by conditional stimuli, with temporal variations in fMRI signals modeled as superpositions of linear hemodynamic responses to these stimuli. However, this standard model overlooks the brain’s nonlinear responses to external stimuli and inherent spatial organizations of brain activities. The dynamics of many natural systems can be well described by their eigenmodes, which are the fundamental resonant patterns determined by their internal structures. Here, we explore the potential of characterizing brain dynamics in a framework that integrates temporal and spatial profiles of brain activities using an eigenmode paradigm. Specifically, spatiotemporally coordinated neural dynamics are represented as local excitations of brain eigenmodes derived from functional connectomes that reflect the brain’s axonal connection structures. We found that eigenmode-represented signals can reliably characterize subtle spatiotemporal trajectories of brain dynamics. Through this representation, we further reveal the widespread existence of non-linear brain responses, which potentially bias the results of conventional analyses. To map brain functions with non-linearity explicitly considered, we exploited the similarity in eigenmode-represented signals measured under repeated stimuli to identify evoked brain responses. Our findings demonstrate that cognitive tasks elicit more extensive engagements of brain networks than those detected using linear models, unveiling previously underappreciated neural architectures underlying specific brain functions. Moreover, our results of function mapping exhibit significantly enhanced individual-level repeatability and accelerated convergence to group consensus, enabling investigations into personalized brain function and potentially advancing applications of fMRI that target individual brain variations.
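The eigenmode representation described in this abstract can be illustrated with the standard graph-Laplacian recipe. The sketch below is a stand-in under stated assumptions (a random symmetric matrix in place of a real functional connectome, 20 retained modes), not the preprint's actual pipeline: eigenmodes are taken as eigenvectors of the connectome's Laplacian, and signals are represented by their mode amplitudes.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200                                    # number of brain regions (toy size)
C = rng.random((n, n))
C = (C + C.T) / 2                          # stand-in symmetric functional connectome
np.fill_diagonal(C, 0)

# Graph Laplacian of the connectome; its eigenvectors serve as spatial eigenmodes.
L = np.diag(C.sum(axis=1)) - C
_, modes = np.linalg.eigh(L)               # columns sorted by eigenvalue (smoothness)

# Build a signal dominated by the 20 smoothest modes plus noise, then represent
# it in eigenmode coordinates and reconstruct from that low-dimensional subspace.
k = 20
signal = modes[:, :k] @ rng.standard_normal((k, 100))   # one fMRI frame per column
signal += 0.1 * rng.standard_normal((n, 100))
coeffs = modes.T @ signal                  # mode amplitudes over time
recon = modes[:, :k] @ coeffs[:k]
print("variance captured:", 1 - ((signal - recon) ** 2).sum() / (signal ** 2).sum())
```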
Article
Neural implants make it possible to decipher brain functions and to design brain–computer interfaces that aim to compensate for the loss of function after a brain injury. After two decades of development worldwide, the efficiency of recording/stimulating implants is now sufficient to consider their transfer to clinical applications. Nevertheless, this translation is slowed by a lack of proof of long-term efficacy. Consequently, a strong research effort is currently dedicated to obtaining devices that can record and stimulate for decades. The major sources of failure identified are the delamination of the insulation and contact layers of the implant, as well as the formation of cracks in the substrate. One of the main challenges is thus to find novel, highly stable substrates and insulators. Diamond is an interesting candidate to meet this requirement, offering both high biocompatibility and resistance to corrosion. Herein, thin diamond layers are combined with a polyimide substrate to form the backbone of a neural implant, and its functionality is assessed. The microelectrodes are made of titanium nitride on platinum coated with poly(3,4-ethylenedioxythiophene):poly(styrene sulfonate). This new type of neural implant is successfully tested in vivo to measure cortical auditory-evoked activity in rats.
Article
Full-text available
This review was inspired by a January 2024 conference held at Friday Harbor Laboratories, WA, honoring the pioneering work of A.O. Dennis Willows, who initiated research on the sea slug Tritonia diomedea (now T. exsulans). A chance discovery while he was a student at a summer course there has, over the years, led to many insights into the roles of identified neurons in neural circuits and their influence on behavior. Among Dennis's trainees was Peter Getting, whose later groundbreaking work on central pattern generators profoundly influenced the field and included one of the earliest uses of realistic modeling for understanding neural circuits. Research on Tritonia has led to key conceptual advances in polymorphic or multifunctional neural networks, intrinsic neuromodulation, and the evolution of neural circuits. It also has enhanced our understanding of geomagnetic sensing, learning and memory mechanisms, prepulse inhibition, and even drug-induced hallucinations. Although the community of researchers studying Tritonia has never been large, its contributions to neuroscience have been substantial, underscoring the importance of examining a diverse array of animal species rather than focusing on a small number of standard model organisms.
Chapter
The hypothesis that certain psychiatric or neurological diseases can be best understood—and therefore treated—at a systems level is an attractive one. For testing such a hypothesis, animal models are useful, and the ability to meaningfully compare multicellular activity between brains is needed. Identifying when population-level functional dynamics are typical or healthy, and when they are aberrant or pathological, is not straightforward, especially when there exists no within-animal baseline state to which to compare. This chapter focuses on practical considerations for applying popular ensemble-identification analyses (such as tSNE, SVM, k-means) to comparisons between groups, such as one might carry out between a wild-type animal and an animal with a genetic mutation affecting a disease-relevant pathway. While the methods are many, the principles are few(er). We focus first and foremost on the choice of metric: precisely what is thought to differ between groups? What aspect of multineuronal activity—stability, number, size, diversity, etc.—might be depleted or augmented in the disease state of interest? And could other behavioral or neural properties, such as arousal, motor output, or baseline neural firing rates, better account for change in your chosen metric, rather than a specific loss or disorganization of neural ensembles? We provide an example analysis pipeline wherein these points are considered.
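The group-comparison logic discussed in this chapter can be made concrete with a permutation test on a chosen ensemble metric. Everything in the sketch below is a hypothetical stand-in, not the chapter's pipeline: the metric (silhouette-selected k-means cluster count over high-activity frames), the group sizes, and the random rasters. The point is only the shape of the analysis, an observed group difference compared against a shuffle-based null.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(2)

def n_ensembles(raster):
    """Hypothetical metric: number of k-means clusters of high-activity frames."""
    frames = raster[:, raster.sum(0) > raster.sum(0).mean()].T.astype(float)
    scores = [silhouette_score(frames,
                               KMeans(k, n_init=5, random_state=0).fit_predict(frames))
              for k in range(2, 9)]
    return 2 + int(np.argmax(scores))

# Toy "wild-type" and "mutant" groups of 8 animals each (random rasters here).
wt = np.array([n_ensembles(rng.random((50, 500)) < 0.1) for _ in range(8)])
mut = np.array([n_ensembles(rng.random((50, 500)) < 0.1) for _ in range(8)])

# Permutation test: is the group difference in the metric larger than chance?
obs = wt.mean() - mut.mean()
pooled = np.concatenate([wt, mut])
null = []
for _ in range(2000):
    g = rng.permutation(pooled)
    null.append(g[:8].mean() - g[8:].mean())
p = (np.abs(np.array(null)) >= abs(obs)).mean()
print(f"observed difference {obs:.2f}, permutation p = {p:.3f}")
```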
Chapter
We present a novel and scalable approach to accurately identify neuronal ensembles in spiking neuron populations. This method, part of a previously published methodology, requires minimal parameter tuning and offers improved computational efficiency, making it a valuable tool for researchers studying complex ensemble activity in neural circuits. By clustering synchronous activities, our methodology allows neurons to be part of multiple ensembles, demonstrating its effectiveness across a wide range of simulation parameters. We also demonstrate the versatility of this technique, applying it to both artificially generated data and spike trains obtained from retinal ganglion cells via multielectrode array recordings. A comparison of the results reveals the superior performance of the method and its wider applicability compared to other prevalent techniques in the field. Our investigations uncover a consistent pattern of stimuli-induced activity, in addition to spontaneously active and sporadic ensembles. These findings suggest a potential compartmentalization of the early visual system into specific functional ensembles. To make this method more accessible and easier to use, we provide a user-friendly graphical interface in this chapter. Our goal is to provide readers with a comprehensive understanding and the resources needed to implement a reliable technique, thus advancing our collective knowledge of the intricate functional networks of the central nervous system.
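A minimal version of the core idea, clustering synchronous activity while letting neurons join multiple ensembles, might look like the Python sketch below. It is not the published method: the synchrony threshold, the assumed number of ensembles, and the membership rule are all illustrative choices.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
spikes = rng.random((80, 2000)) < 0.02             # toy raster: neurons x time bins

# Keep only frames with above-chance population synchrony.
counts = spikes.sum(axis=0)
sync = spikes[:, counts > np.percentile(counts, 95)].astype(float)

# Cluster the synchronous population vectors; each cluster is a candidate ensemble.
k = 4                                              # assumed number of ensembles
labels = KMeans(k, n_init=10, random_state=0).fit_predict(sync.T)

# A neuron joins every ensemble whose frames it reliably participates in,
# so membership in multiple ensembles is allowed by construction.
membership = np.zeros((spikes.shape[0], k), dtype=bool)
for c in range(k):
    membership[:, c] = sync[:, labels == c].mean(axis=1) > 2 * spikes.mean()
print("neurons per ensemble:", membership.sum(axis=0))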
Chapter
Neuronal populations in vivo are characterized by the recurring coactivation of specific groups of neurons, referred to as neuronal assemblies. These assemblies are thought to constitute fundamental units for brain computations. Identifying neuronal assemblies through statistical analysis of simultaneous recordings of large neuronal populations is crucial. Here, we describe a computationally fast algorithm based on dimensionality reduction techniques, developed to detect neuronal assemblies and analyze their dynamics. Importantly, it allows for overlap between the detected assemblies, where neurons are able to transiently participate in multiple assemblies. We show that this method has been successfully applied for the analysis of calcium imaging experiments involving thousands of simultaneously recorded neurons, and its accuracy and scalability is supported by benchmarking studies on simulated data.
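Dimensionality-reduction pipelines of this kind are often built from PCA and ICA; the sketch below follows that widely used recipe (a Marchenko-Pastur bound to count assemblies, then ICA to extract possibly overlapping weight vectors) on planted toy data. It illustrates the general approach rather than the specific algorithm described in this chapter.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(4)
z = rng.standard_normal((100, 5000))               # activity matrix: neurons x bins
for members in (range(0, 30), range(20, 50)):      # plant two overlapping assemblies
    events = rng.random(5000) < 0.03
    z[np.array(members)[:, None], np.where(events)[0]] += 2.0
z = (z - z.mean(1, keepdims=True)) / z.std(1, keepdims=True)

# Count assemblies: eigenvalues of the correlation matrix exceeding the
# Marchenko-Pastur bound expected for independent neurons.
n, t = z.shape
evals = np.linalg.eigvalsh(np.corrcoef(z))
n_assemblies = int((evals > (1 + np.sqrt(n / t)) ** 2).sum())

# ICA on the significant subspace yields (possibly overlapping) weight vectors.
weights = FastICA(n_components=n_assemblies, random_state=0).fit(z.T).components_
print(n_assemblies, "assemblies detected")
print("top-weight neurons:", np.argsort(-np.abs(weights), axis=1)[:, :5])
```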
Chapter
A hallmark of neural population activity is neural assemblies, groups of neurons that consistently coactivate. Quantitative analysis of these assemblies requires reliable and objective methods for their detection and extraction from recordings of neural population activity, increasingly in the form of calcium imaging data. Here we discuss an algorithm which achieves this goal. The basic idea of this approach is to form a similarity graph of population activity patterns with a high level of coactivity. Methods developed for community detection in graphs can then be applied to obtain a statistical estimate for the number of assemblies, followed by extraction via standard clustering methods. Expanding on the original MATLAB implementation, here we explain the application of this algorithm to example data using a more recent and efficient Python implementation.
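As a rough illustration of this graph-based approach under stated assumptions (cosine similarity between high-coactivity frames, an arbitrary 0.3 edge threshold, and greedy modularity maximization for community detection), a Python version might look as follows; the original MATLAB and Python implementations will differ in detail.

```python
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

rng = np.random.default_rng(5)
spikes = rng.random((60, 3000)) < 0.03            # toy raster: neurons x time bins
counts = spikes.sum(axis=0)
frames = spikes[:, counts >= np.percentile(counts, 95)].astype(float)

# Similarity graph over high-coactivity population vectors (cosine similarity).
norms = np.linalg.norm(frames, axis=0)
sim = (frames.T @ frames) / np.outer(norms, norms)
np.fill_diagonal(sim, 0)
G = nx.from_numpy_array(sim * (sim > 0.3))        # keep sufficiently similar pairs

# Community detection estimates the number of assemblies; each community's
# frames are then averaged here as a simple stand-in for extraction.
communities = [c for c in greedy_modularity_communities(G) if len(c) > 5]
print(f"{len(communities)} candidate assemblies")
for i, c in enumerate(communities):
    template = frames[:, sorted(c)].mean(axis=1)
    print(f"assembly {i}: core neurons {np.where(template > 0.5)[0][:10]}")
```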
Chapter
Nanoneurosurgery represents a groundbreaking paradigm in the central nervous system (CNS) therapeutic arena, providing unprecedented precision and potential for direct intervention. This review delves into the transformative potential of molecular nanoneurosurgery, with a particular focus on its application in treating conditions such as renovascular hypertension (RVHT) and intracerebral hemorrhage (ICH) in a rat model. Utilizing the self-assembling peptide (RADA)4, we demonstrate the therapeutic efficacy of this nanomaterial in mitigating hematoma expansion, reducing cell apoptosis and inhibiting inflammatory responses post-ICH. The surgical methodology employed encompasses a comprehensive sequence from the induction of RVHT, selection criteria based on systolic blood pressure, ICH induction, blood clot aspiration, and precise administration of (RADA)4, to the subsequent evaluations of hematoma volume, cell death, and inflammatory markers. The results highlight a significant reduction in hematoma volume, TUNEL-positive cells, and iNOS-immunoreactive cells in the (RADA)4-treated group, showcasing the material’s protective and therapeutic potential. While this study sheds light on the promising applications of nanoneurosurgery, it also underscores the need for further research and development in this domain to enhance the precision, efficacy, and safety of such nanomaterials in clinical settings. Moreover, nanoneurosurgery emerges as a pioneering approach in the realm of nerve repair, presenting innovative solutions with enhanced precision and potential for functional restoration. This review meticulously examines the advancements and applications of nanoneurosurgery, with a distinct emphasis on optic nerve repair, a challenging yet crucial domain within neurosurgery. We explore the utilization of self-assembling peptides such as (RADA)4, elucidating its role in promoting nerve regeneration and functional recovery in models of optic nerve injury. Through a comprehensive analysis of surgical methodologies, this study highlights the intricate procedures involved in the administration of nanomaterials, emphasizing their therapeutic efficacy in mitigating damage, reducing inflammation, and enhancing neuronal regeneration. The outcomes underscore a significant improvement in nerve function and structural integrity, marking a promising step toward the development of effective treatments for optic nerve injuries. Additionally, the review discusses the broader implications of nanoneurosurgery in the central nervous system (CNS), showcasing its potential to address a spectrum of neurological disorders. The results emphasize the need for ongoing research, standardized protocols, and safety evaluations to fully harness the potential of nanoneurosurgery, ensuring its successful translation from experimental models to clinical practice.
Article
Full-text available
The gonadotropin-releasing hormone (GnRH) neurons operate as a neuronal ensemble exhibiting coordinated activity once every reproductive cycle to generate the preovulatory GnRH surge. Using GCaMP fiber photometry at the GnRH neuron distal dendrons to measure the output of this widely scattered population in female mice, we find that the onset, amplitude, and profile of GnRH neuron surge activity exhibits substantial variability from cycle to cycle both between and within individual mice. This was also evident when measuring successive proestrous luteinizing hormone surges. Studies combining short (c-Fos and c-Jun) and long (genetic robust activity marking) term indices of immediate early gene activation revealed that, while ∼50% of GnRH neurons were activated at the time of each surge, only half of these neurons had been active during the previous proestrous surge. These observations reveal marked inter- and intra-individual variability in the GnRH surge mechanism. Remarkably, different subpopulations of overlapping GnRH neurons are recruited to the ensemble each estrous cycle to generate the GnRH surge. While engendering variability in the surge mechanism itself, this likely provides substantial robustness to a key event underlying mammalian reproduction.
Article
Full-text available
A general mathematical description of how the brain sequentially encodes knowledge remains elusive. We propose a linear solution for serial learning tasks, based on the concept of mixed selectivity in high-dimensional neural state spaces. In our framework, neural representations of items in a sequence are projected along a “geometric” mental line learned through classical conditioning. The model successfully solves serial position tasks and explains behaviors observed in humans and animals during transitive inference tasks amidst noisy sensory input and stochastic neural activity. This approach extends to recurrent neural networks performing motor decision tasks, where the same geometric mental line correlates with motor plans and modulates network activity according to the symbolic distance between items. Serial ordering is thus predicted to emerge as a monotonic mapping between sensory input and behavioral output, highlighting a possible pivotal role for motor-related associative cortices in transitive inference tasks.
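In conventional notation (assumed here, not copied from the paper), the "geometric mental line" readout can be written as a linear projection:

```latex
% Each item evokes a high-dimensional representation x; conditioning learns a
% direction w, and rank is read out by projecting onto it. Phi is the standard
% normal CDF and sigma an assumed noise scale.
\begin{align}
  \hat{s}(x) &= w^{\top} x && \text{(projection onto the learned mental line)} \\
  \text{choose } A \text{ over } B &\iff \hat{s}(x_A) > \hat{s}(x_B) \\
  P(\text{correct}) &= \Phi\!\left(\frac{\hat{s}(x_A) - \hat{s}(x_B)}{\sigma}\right)
\end{align}
```

On this reading, accuracy grows with symbolic distance because projections of adjacent items lie closer together on the line than those of distant items, which is the behavioral signature the abstract describes.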
Article
Full-text available
Electrical neural interfaces provide direct communication pathways between living brain tissue and engineered devices to understand brain function. However, conventional neural probes have remained limited in providing stable, long-lasting recordings because of large mechanical and structural mismatches with respect to brain tissue. The development of flexible probes provides a promising approach to tackle these challenges. In this review, various structural designs of flexible intracortical probes for promoting long-term neural integration, including thin film filament and mesh probe structures that provide similar geometric and mechanical properties to brain tissue and self-deployable probe structure that enables moving the functional sensors away from the insertion trauma, are summarized, highlighting the important role of structural design in improving the long-term recording stability of neural probes.
Article
Ionic current levels of identified neurons vary substantially across individual animals. Yet, under similar conditions, neural circuit output can be remarkably similar, as evidenced in many motor systems. All neural circuits are influenced by multiple neuromodulators, which provide flexibility to their output. These neuromodulators often overlap in their actions by modulating the same channel type or synapse, yet have neuron-specific actions resulting from distinct receptor expression. Because of this different receptor expression pattern, in the presence of multiple convergent neuromodulators, a common downstream target would be activated more uniformly in circuit neurons across individuals. We therefore propose that a baseline tonic (non-saturating) level of comodulation by convergent neuromodulators can reduce interindividual variability of circuit output. We tested this hypothesis in the pyloric circuit of the crab, Cancer borealis. Multiple excitatory neuropeptides converge to activate the same voltage-gated current in this circuit, but different subsets of pyloric neurons have receptors for each peptide. We quantified the interindividual variability of the unmodulated pyloric circuit output by measuring the activity phases, cycle frequency, and intraburst spike number and frequency. We then examined the variability in the presence of different combinations and concentrations of three neuropeptides. We found that at mid-level concentration (30 nM) but not at near-threshold (1 nM) or saturating (1 µM) concentrations, comodulation by multiple neuropeptides reduced the circuit output variability. Notably, the interindividual variability of response properties of an isolated neuron was not reduced by comodulation, suggesting that the reduction of output variability may emerge as a network effect.
Article
Full-text available
A new family of highly fluorescent indicators has been synthesized for biochemical studies of the physiological role of cytosolic free Ca2+. The compounds combine an 8-coordinate tetracarboxylate chelating site with stilbene chromophores. Incorporation of the ethylenic linkage of the stilbene into a heterocyclic ring enhances the quantum efficiency and photochemical stability of the fluorophore. Compared to their widely used predecessor, “quin2”, the new dyes offer up to 30-fold brighter fluorescence, major changes in wavelength not just intensity upon Ca2+ binding, slightly lower affinities for Ca2+, slightly longer wavelengths of excitation, and considerably improved selectivity for Ca2+ over other divalent cations. These properties, particularly the wavelength sensitivity to Ca2+, should make these dyes the preferred fluorescent indicators for many intracellular applications, especially in single cells, adherent cell layers, or bulk tissues.
Article
Full-text available
Previous studies have reported that some neurons in the inferior temporal (IT) cortex respond selectively to highly specific complex objects. In the present study, we conducted the first systematic survey of the responses of IT neurons to both simple stimuli, such as edges and bars, and highly complex stimuli, such as models of flowers, snakes, hands, and faces. If a neuron responded to any of these stimuli, we attempted to isolate the critical stimulus features underlying the response. We found that many of the responsive neurons responded well to virtually every stimulus tested. The remaining, stimulus-selective cells were often selective along the dimensions of shape, color, or texture of a stimulus, and this selectivity was maintained throughout a large receptive field. Although most IT neurons do not appear to be "detectors" for complex objects, we did find a separate population of cells that responded selectively to faces. The responses of these cells were dependent on the configuration of specific face features, and their selectivity was maintained over changes in stimulus size and position. A particularly high incidence of such cells was found deep in the superior temporal sulcus. These results indicate that there may be specialized mechanisms for the analysis of faces in IT cortex.
Article
Full-text available
Two dynamical models, proposed by Hopfield and Little to account for the collective behavior of neural networks, are analyzed. The long-time behavior of these models is governed by the statistical mechanics of infinite-range Ising spin-glass Hamiltonians. Certain configurations of the spin system, chosen at random, which serve as memories, are stored in the quenched random couplings. The present analysis is restricted to the case of a finite number p of memorized spin configurations, in the thermodynamic limit. We show that the long-time behavior of the two models is identical, for all temperatures below a transition temperature Tc. The structure of the stable and metastable states is displayed. Below Tc, these systems have 2p ground states of the Mattis type: Each one of them is fully correlated with one of the stored patterns. Below T∼0.46Tc, additional dynamically stable states appear. These metastable states correspond to specific mixings of the embedded patterns. The thermodynamic and dynamic properties of the system in the cases of more general distributions of random memories are discussed.
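For reference, the standard Hopfield/Little setup analyzed in this work can be written compactly: N Ising spins s_i = ±1, p stored patterns ξ^μ, Hebbian couplings, and pattern overlaps m^μ.

```latex
\begin{align}
  H &= -\tfrac{1}{2}\sum_{i \neq j} J_{ij}\, s_i s_j,
  \qquad
  J_{ij} = \frac{1}{N}\sum_{\mu=1}^{p} \xi_i^{\mu}\,\xi_j^{\mu}, \\
  m^{\mu} &= \frac{1}{N}\sum_{i=1}^{N} \xi_i^{\mu}\,\langle s_i \rangle
\end{align}
```

In this notation, the Mattis ground states of the abstract are minima with a single nonzero overlap m^μ, while the metastable mixture states appearing below T ∼ 0.46 Tc have several comparable nonzero overlaps.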
Article
Full-text available
Significance: What are the origins of resting-state functional connectivity patterns? One dominating view is that they index ongoing cognitive processes. However, this conclusion is in conflict with studies showing that long-range functional connectivity persists after loss of consciousness, possibly reflecting structural connectivity maps. In this work we respond to this question showing that in fact both sources have a clear and separable contribution to resting-state patterns. We show that under anesthesia, the dominating functional configurations have low information capacity and lack negative correlations. Importantly, they are rigid, tied to the anatomical map. Conversely, wakefulness is characterized by the dynamical exploration of a rich, flexible repertoire of functional configurations. These dynamical properties constitute a signature of consciousness.
Article
Full-text available
We describe an all-optical strategy for simultaneously manipulating and recording the activity of multiple neurons with cellular resolution in vivo. We performed simultaneous two-photon optogenetic activation and calcium imaging by coexpression of a red-shifted opsin and a genetically encoded calcium indicator. A spatial light modulator allows tens of user-selected neurons to be targeted for spatiotemporally precise concurrent optogenetic activation, while simultaneous fast calcium imaging provides high-resolution network-wide readout of the manipulation with negligible optical cross-talk. Proof-of-principle experiments in mouse barrel cortex demonstrate interrogation of the same neuronal population during different behavioral states and targeting of neuronal ensembles based on their functional signature. This approach extends the optogenetic toolkit beyond the specificity obtained with genetic or viral approaches, enabling high-throughput, flexible and long-term optical interrogation of functionally defined neural circuits with single-cell and single-spike resolution in the mouse brain in vivo.
Article
Full-text available
Significance: This study demonstrates that neuronal groups or ensembles, rather than individual neurons, are emergent functional units of cortical activity. We show that in the presence and absence of visual stimulation, cortical activity is dominated by coactive groups of neurons forming ensembles. These ensembles are flexible and cannot be accounted for by the independent firing properties of neurons in isolation. Intrinsically generated ensembles and stimulus-evoked ensembles are similar, with one main difference: Whereas intrinsic ensembles recur at random time intervals, visually evoked ensembles are time-locked to stimuli. We propose that visual stimuli recruit endogenously generated ensembles to represent visual attributes.
Article
Full-text available
We introduce a scanless optical method to image neuronal activity in three dimensions simultaneously. Using a spatial light modulator and a custom-designed phase mask, we illuminate and collect light simultaneously from different focal planes and perform calcium imaging of neuronal activity in vitro and in vivo. This method, combining structured illumination with volume projection imaging, could be used as a technological platform for brain activity mapping.
Article
Full-text available
The discrimination and production of temporal patterns on the scale of hundreds of milliseconds are critical to sensory and motor processing. Indeed, most complex behaviours, such as speech comprehension and production, would be impossible in the absence of sophisticated timing mechanisms. Despite the importance of timing to human learning and cognition, little is known about the underlying mechanisms, in particular whether timing relies on specialized dedicated circuits and mechanisms or on general and intrinsic properties of neurons and neural circuits. Here, we review experimental data describing timing and interval-selective neurons in vivo and in vitro. We also review theoretical models of timing, focusing primarily on the state-dependent network model, which proposes that timing in the subsecond range relies on the inherent time-dependent properties of neurons and the active neural dynamics within recurrent circuits. Within this framework, time is naturally encoded in populations of neurons whose pattern of activity is dynamically changing in time. Together, we argue that current experimental and theoretical studies provide sufficient evidence to conclude that at least some forms of temporal processing reflect intrinsic computations based on local neural network dynamics.
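The core claim of the state-dependent network model, that elapsed time can be read out from the evolving trajectory of a recurrent circuit, is easy to demonstrate in a toy simulation. The sketch below uses an arbitrary random network and a least-squares readout; the network size, gain, and initial condition are illustrative assumptions, not parameters from the reviewed models.

```python
import numpy as np

rng = np.random.default_rng(6)
n, steps = 200, 100
W = 1.8 * rng.standard_normal((n, n)) / np.sqrt(n)   # random recurrent weights
x = 0.1 * rng.standard_normal(n)                     # state set by a triggering input

# Run the circuit: the trajectory of population activity is the "clock".
states = []
for _ in range(steps):
    x = np.tanh(W @ x)
    states.append(x.copy())
X = np.array(states)                                 # time x neurons

# A linear readout fit to the trajectory reports elapsed time, showing that
# time is implicitly encoded in the evolving pattern of population activity.
t = np.arange(steps, dtype=float)
w, *_ = np.linalg.lstsq(X, t, rcond=None)
print("decoded vs true time, first 5 steps:")
print(np.round(X @ w, 1)[:5], t[:5])
```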
Article
Full-text available
Prefrontal cortex is thought to have a fundamental role in flexible, context-dependent behaviour, but the exact nature of the computations underlying this role remains largely unknown. In particular, individual prefrontal neurons often generate remarkably complex responses that defy deep understanding of their contribution to behaviour. Here we study prefrontal cortex activity in macaque monkeys trained to flexibly select and integrate noisy sensory inputs towards a choice. We find that the observed complexity and functional roles of single neurons are readily understood in the framework of a dynamical process unfolding at the level of the population. The population dynamics can be reproduced by a trained recurrent neural network, which suggests a previously unknown mechanism for selection and integration of task-relevant inputs. This mechanism indicates that selection and integration are two aspects of a single dynamical process unfolding within the same prefrontal circuits, and potentially provides a novel, general framework for understanding context-dependent computations.
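The generic trained-RNN dynamics used in studies of this kind can be stated in a few lines; the symbols here are the conventional ones, assumed rather than copied from the paper.

```latex
\begin{align}
  \tau\,\dot{x} &= -x + W_{\mathrm{rec}}\, r + W_{\mathrm{in}}\, u + b,
  \qquad r = \tanh(x), \\
  z &= w_{\mathrm{out}}^{\top}\, r
\end{align}
```

Here u carries both the noisy sensory evidence and a context cue, and training shapes W_rec so that only the cued input dimension is integrated toward the choice readout z, which is the selection-and-integration mechanism the abstract describes.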
Article
Full-text available
Recent efforts in neuroscience research have been aimed at obtaining detailed anatomical neuronal wiring maps as well as information on how neurons in these networks engage in dynamic activities. Although the entire connectivity map of the nervous system of Caenorhabditis elegans has been known for more than 25 years, this knowledge has not been sufficient to predict all functional connections underlying behavior. To approach this goal, we developed a two-photon technique for brain-wide calcium imaging in C. elegans, using wide-field temporal focusing (WF-TeFo). Pivotal to our results was the use of a nuclear-localized, genetically encoded calcium indicator, NLS-GCaMP5K, that permits unambiguous discrimination of individual neurons within the densely packed head ganglia of C. elegans. We demonstrate near-simultaneous recording of activity of up to 70% of all head neurons. In combination with a lab-on-a-chip device for stimulus delivery, this method provides an enabling platform for establishing functional maps of neuronal networks.
Article
Full-text available
Can You Trust Your Memory? Being highly imaginative animals, humans constantly recall past experiences. These internally generated stimuli sometimes get associated with concurrent external stimuli, which can lead to the formation of false memories. Ramirez et al. (p. 387; see the cover) identified a population of cells in the dentate gyrus of the mouse hippocampus that encoded a particular context and were able to generate a false memory and study its neural and behavioral interactions with true memories. Optogenetic reactivation of memory engram-bearing cells was not only sufficient for the behavioral recall of that memory, but could also serve as a conditioned stimulus for the formation of an associative memory.
Article
Full-text available
Fluorescent calcium sensors are widely used to image neural activity. Using structure-based mutagenesis and neuron-based screening, we developed a family of ultrasensitive protein calcium sensors (GCaMP6) that outperformed other sensors in cultured neurons and in zebrafish, flies and mice in vivo. In layer 2/3 pyramidal neurons of the mouse visual cortex, GCaMP6 reliably detected single action potentials in neuronal somata and orientation-tuned synaptic calcium transients in individual dendritic spines. The orientation tuning of structurally persistent spines was largely stable over timescales of weeks. Orientation tuning averaged across spine populations predicted the tuning of their parent cell. Although the somata of GABAergic neurons showed little orientation tuning, their dendrites included highly tuned dendritic segments (5-40-µm long). GCaMP6 sensors thus provide new windows into the organization and dynamics of neural circuits over multiple spatial and temporal scales.
Article
Full-text available
In this Historical Perspective, we ask what information is needed beyond connectivity diagrams to understand the function of nervous systems. Informed by invertebrate circuits whose connectivities are known, we highlight the importance of neuronal dynamics and neuromodulation, and the existence of parallel circuits. The vertebrate retina has these features in common with invertebrate circuits, suggesting that they are general across animals. Comparisons across these systems suggest approaches to study the functional organization of large circuits based on existing knowledge of small circuits.
Book
Behavioral Neurobiology starts off with an introduction. The next chapter presents the fundamentals of neurobiology. The text also gives a brief history of the study of animal behavior and its neural basis, and examines orienting movements. Other topics covered include active orientation and localization, the neuronal control of motor output, the neuronal processing of sensory information, and sensorimotor integration. The text then goes on to consider neuromodulation and the accommodation of motivational changes in behavior, circadian rhythms and biological clocks, and large-scale navigation in terms of migration and homing, communication, and cellular mechanisms of learning and memory.
Book
A leading neurobiologist explores the fundamental function of dendritic spines in neural circuits by analyzing different aspects of their biology, including structure, development, motility, and plasticity. Most neurons in the brain are covered by dendritic spines, small protrusions that arise from dendrites, covering them like leaves on a tree. But a hundred and twenty years after spines were first described by Ramón y Cajal, their function is still unclear. Dozens of different functions have been proposed, from Cajal's idea that they enhance neuronal interconnectivity to hypotheses that spines serve as plasticity machines, neuroprotective devices, or even digital logic elements. In Dendritic Spines, leading neurobiologist Rafael Yuste attempts to solve the “spine problem,” searching for the fundamental function of spines. He does this by examining many aspects of spine biology that have fascinated him over the years, including their structure, development, motility, plasticity, biophysical properties, and calcium compartmentalization. Yuste argues that we may never understand how the brain works without understanding the specific function of spines. In this book, he offers a synthesis of the information that has been gathered on spines (much of which comes from his own studies of the mammalian cortex), linking their function with the computational logic of the neuronal circuits that use them. He argues that once viewed from the circuit perspective, all the pieces of the spine puzzle fit together nicely into a single, overarching function. Yuste connects these two topics, integrating current knowledge of spines with that of key features of the circuits in which they operate. He concludes with a speculative chapter on the computational function of spines, searching for the ultimate logic of their existence in the brain and offering a proposal that is sure to stimulate discussions and drive future research.
Book
Churchland and Sejnowski address the foundational ideas of the emerging field of computational neuroscience, examine a diverse range of neural network models, and consider future directions of the field. How do groups of neurons interact to enable the organism to see, decide, and move appropriately? What are the principles whereby networks of neurons represent and compute? These are the central questions probed by The Computational Brain, the first unified and broadly accessible book to bring together computational concepts and behavioral data within a neurobiological framework. Computer models constrained by neurobiological data can help reveal how networks of neurons subserve perception and behavior, how their physical interactions can yield global results in perception and behavior, and how their physical properties are used to code information and compute solutions. The Computational Brain focuses mainly on three domains: visual perception, learning and memory, and sensorimotor integration. Examples of recent computer models in these domains are discussed in detail, highlighting strengths and weaknesses and extracting principles applicable to other domains. Churchland and Sejnowski show how both abstract models and neurobiologically realistic models can have useful roles in computational neuroscience, and they predict the coevolution of models and experiments at many levels of organization, from the neuron to the system. The Computational Brain addresses a broad audience: neuroscientists, computer scientists, cognitive scientists, and philosophers. It is written for both the expert and novice. A basic overview of neuroscience and computational theory is provided, followed by a study of some of the most recent and sophisticated modeling work in the context of relevant neurobiological research. Technical terms are clearly explained in the text, and definitions are provided in an extensive glossary. The appendix contains a précis of neurobiological techniques.
Book
This book is organized into three parts that correspond with the main groups of chapters delivered during the Cajal Centenary Meeting on The Neuron Doctrine. These chapters represent important aspects of the morphology, development, and function of the cerebellum and related structures. Clearly an exhaustive analysis of all aspects of the cerebellar system, as they relate to the legacy of Ramon y Cajal, would be impossible to contain in just one volume, given its far-reaching impact. Instead, we deliberately steered away from the traditional handbook approach that some of us have taken in the past and selected those aspects of cerebellar research currently under vigorous study that would also represent the widest scope of interest for neuroscientists in general and for cerebellar specialists in particular. In particular, we felt that as the discrete anatomy of the cerebellum is quite well known, only certain aspects of the structure should be discussed here. For example, the organization of the pontocerebellar pathways, we felt, would be particularly interesting given the enormity of the system in higher vertebrates. Also of interest is the distribution and development of the synaptology and neurotransmitter properties in this cortex. Indeed, from the point of view of cerebellar development, this may represent one of the clearest paradigms in the understanding of rules for neurogenesis for the central nervous system.
Book
Dendrites form the major receiving part of neurons. It is within these highly complex, branching structures that the real work of the nervous system takes place. The dendrites of neurons receive thousands of synaptic inputs from other neurons. However, dendrites do more than simply collect and funnel these signals to the soma and axon; they shape and integrate the inputs in complex ways. Despite being discovered over a century ago, dendrites received little research attention until the early 1950s. Over the past few years there has been a dramatic explosion of interest in the function of these beautiful structures. Recent new research has developed our understanding of the properties of dendrites, and their role in neuronal function. The first edition of this book was a landmark in the literature, stimulating and guiding further research. The new edition substantially updates the earlier volume, and includes five new chapters. It gathers new information on dendrites into a single volume, with contributions written by leading researchers in the field. The book presents a survey of the current state of our knowledge of dendrites, from their morphology and development through to their electrical, chemical, and computational properties.
Article
Cells in area TE of the inferotemporal cortex of the monkey brain selectively respond to various moderately complex object features, and those that cluster in a columnar region that runs perpendicular to the cortical surface respond to similar features. Although cells within a column respond to similar features, their selectivity is not necessarily identical. The data of optical imaging in TE have suggested that the borders between neighboring columns are not discrete; a continuous mapping of complex feature space within a larger region contains several partially overlapped columns. This continuous mapping may be used for various computations, such as production of the image of the object at different viewing angles, illumination conditions, and articulation poses.
Chapter
Despite unprecedented success, modern neuroscience continues to face many cardinal issues in relation to the overall nature of brain function. Among such quandaries, that of the essentially intrinsic or extrinsic organization of nervous system activity must be considered fundamental. A general approach to this problem was proposed by Immanuel Kant (1781) in relation to cognition, which he deemed to be an innate or “a prioristic” property. The opposite approach was taken by William James (1890), who viewed cognition as extrinsic in nature.
Article
This chapter is about linking biophysics to computation. While the properties of dendritic voltage-gated currents in vertebrate neurons have recently received considerable (and renewed) attention, their potential functions for circuit integration and behavior remain largely speculative. This chapter summarizes results obtained from insect neurons. In such systems, the functional link between the particular biophysical properties of dendrites and the specific computations performed by them can often be made, providing an important explanatory bridge between different levels of analysis.
Article
This book presents a unified approach to understanding memory, attention, and decision-making. It shows how these fundamental functions for cognitive neuroscience can be understood in a common and unifying computational neuroscience framework. This framework links empirical research on brain function from neurophysiology, functional neuroimaging, and the effects of brain damage, to a description of how neural networks in the brain implement these functions using a set of common principles. The book describes the principles of operation of these networks, and how they could implement such important functions as memory, attention, and decision-making. The book discusses the hippocampus and memory, reward- and punishment-related learning, emotion and motivation, invariant visual object recognition learning, short-term memory, attention, biased competition, probabilistic decision-making, action selection, and decision-making.
Article
Book
Article
Linking neural microcircuit function to emergent properties of the mammalian brain requires fine-scale manipulation and measurement of neural activity during behavior, where each neuron's coding and dynamics can be characterized. We developed an optical method for simultaneous cellular-resolution stimulation and large-scale recording of neuronal activity in behaving mice. Dual-wavelength two-photon excitation allowed largely independent functional imaging with a green fluorescent calcium sensor (GCaMP3, λ = 920 ± 6 nm) and single-neuron photostimulation with a red-shifted optogenetic probe (C1V1, λ = 1,064 ± 6 nm) in neurons coexpressing the two proteins. We manipulated task-modulated activity in individual hippocampal CA1 place cells during spatial navigation in a virtual reality environment, mimicking natural place-field activity, or 'biasing', to reveal subthreshold dynamics. Notably, manipulating single place-cell activity also affected activity in small groups of other place cells that were active around the same time in the task, suggesting a functional role for local place cell interactions in shaping firing fields.
Article
Advances in experimental techniques, including behavioral paradigms using rich stimuli under closed loop conditions and the interfacing of neural systems with external inputs and outputs, reveal complex dynamics in the neural code and require a revisiting of standard concepts of representation. High-throughput recording and imaging methods along with the ability to observe and control neuronal subpopulations allow increasingly detailed access to the neural circuitry that subserves neural representations and the computations they support. How do we harness theory to build biologically grounded models of complex neural function?
Article
Brain function relies on communication between large populations of neurons across multiple brain areas, a full understanding of which would require knowledge of the time-varying activity of all neurons in the central nervous system. Here we use light-sheet microscopy to record activity, reported through the genetically encoded calcium indicator GCaMP5G, from the entire volume of the brain of the larval zebrafish in vivo at 0.8 Hz, capturing more than 80% of all neurons at single-cell resolution. Demonstrating how this technique can be used to reveal functionally defined circuits across the brain, we identify two populations of neurons with correlated activity patterns. One circuit consists of hindbrain neurons functionally coupled to spinal cord neuropil. The other consists of an anatomically symmetric population in the anterior hindbrain, with activity in the left and right halves oscillating in antiphase, on a timescale of 20 s, and coupled to equally slow oscillations in the inferior olive.
Article
The function of neocortical interneurons is still unclear, and, as often happens, one may be able to draw functional insights from considering the structure. In this spirit we describe recent structural results and discuss their potential functional implications. Most GABAergic interneurons innervate nearby pyramidal neurons very densely and without any apparent specificity, as if they were extending a 'blanket of inhibition', contacting pyramidal neurons often in an overlapping fashion. While subtypes of interneurons specifically target subcellular compartments of pyramidal cells, and they also target different layers selectively, they appear to treat all neighboring pyramidal cells the same and innervate them massively. We explore the functional implications and temporal properties of dense, overlapping inhibition by four interneuron populations.
Article
Monitoring representative fractions of neurons from multiple brain circuits in behaving animals is necessary for understanding neuronal computation. Here we describe a system that allows high channel count recordings from a small volume of neuronal tissue using a lightweight signal multiplexing head-stage that permits free behavior of small rodents. The system integrates multi-shank, high-density recording silicon probes, ultra-flexible interconnects and a miniaturized microdrive. These improvements allowed for simultaneous recordings of local field potentials and unit activity from hundreds of sites without confining free movements of the animal. The advantages of large-scale recordings are illustrated by determining the electro-anatomical boundaries of layers and regions in the hippocampus and neocortex and constructing a circuit diagram of functional connections among neurons in real anatomical space. These methods will allow the investigation of circuit operations and behavior-dependent inter-regional interactions for testing hypotheses of neural networks and brain function.
Article
Numerous experimental data suggest that simultaneously or sequentially activated assemblies of neurons play a key role in the storage and computational use of long-term memory in the brain. However, a model that elucidates how these memory traces could emerge through spike-timing-dependent plasticity (STDP) has been missing. We show here that stimulus-specific assemblies of neurons emerge automatically through STDP in a simple cortical microcircuit model. The model that we consider is a randomly connected network of well known microcircuit motifs: pyramidal cells with lateral inhibition. We show that the emergent assembly codes for repeatedly occurring spatiotemporal input patterns tend to fire in some loose, sequential manner that is reminiscent of experimentally observed stereotypical trajectories of network states. We also show that the emergent assembly codes add an important computational capability to standard models for online computations in cortical microcircuits: the capability to integrate information from long-term memory with information from novel spike inputs.