Article

Functional possibilities for synapses on dendrites and on dendritic spines

... In the cerebral cortex, the majority of synapses are located on the dendritic trees of neurons, and brain studies have revealed that much of the information processing is carried out by the dendrites [2][3][4][5][6][7][8]. ...
... This is particularly true considering that several brain researchers have proposed that dendrites (not the neuron) are the basic computing devices of the brain. A neuron together with its associated dendritic structure can work as multiple, almost independent, functional subunits, where each subunit can implement different logical operations [3,4,16-19]. The interested reader may peruse the works of several researchers [3-8,16,20,21] who have proposed possible biophysical mechanisms for dendritic computation of logical functions such as 'AND', 'NOT', 'OR', and 'XOR'. ...
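The dendritic logic operations surveyed in these snippets can be illustrated with a toy model (my construction, not taken from any of the cited papers): two thresholded dendritic "subunits" each compute an AND-NOT on the same pair of inputs, and the soma pools them with an OR, which yields XOR even though no single linear threshold unit can.

```python
def subunit(inputs, weights, theta):
    """A dendritic subunit: thresholded weighted sum with 0/1 output."""
    return int(sum(w * x for w, x in zip(weights, inputs)) >= theta)

def dendritic_xor(x1, x2):
    # Subunit 1 detects (x1 AND NOT x2); subunit 2 detects (x2 AND NOT x1).
    s1 = subunit((x1, x2), (1.0, -1.0), 1.0)
    s2 = subunit((x1, x2), (-1.0, 1.0), 1.0)
    # The soma pools the two subunits with an OR (threshold 1).
    return subunit((s1, s2), (1.0, 1.0), 1.0)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, dendritic_xor(a, b))
```

The weights and thresholds are illustrative choices; the point is only that one stage of nonlinear subunit processing before somatic pooling lifts the cell past linear separability.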
Article
Full-text available
This paper presents a novel lattice-based biomimetic neural network trained by means of a similarity measure derived from a lattice positive valuation. For a wide class of pattern recognition problems, the proposed artificial neural network, implemented as a dendritic hetero-associative memory, delivers high percentages of successful classification. The memory is a feedforward dendritic network whose arithmetical operations are based on lattice algebra and can be applied to real multivalued inputs. In this approach, the realization of recognition tasks shows the inherent capability of prototype-class pattern association in a fast and straightforward manner, without need of any iterative scheme subject to convergence issues. Using an artificially designed data set, we show how the proposed trained neural net classifies a test input pattern. Application to a few typical real-world data sets illustrates the overall network classification performance using different training and testing sample subsets generated randomly.
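To give a flavor of the lattice-algebra arithmetic mentioned in this abstract (a minimal sketch under assumed conventions; the paper's hetero-associative memory is more elaborate), lattice models of dendritic computing replace the multiply-accumulate of conventional neurons with addition and maximum:

```python
def lattice_neuron(x, w):
    """Max-plus 'inner product': max_k (x[k] + w[k]), the lattice analogue
    of the weighted sum used in conventional neural networks."""
    return max(xk + wk for xk, wk in zip(x, w))

# Illustrative weight choice: negating a stored pattern makes the neuron's
# response peak (at 0.0) when the input matches the pattern exactly.
stored = [2.0, -1.0, 0.5]
w = [-v for v in stored]
print(lattice_neuron(stored, w))  # 0.0 for a perfect match
```

Because only additions and order comparisons are involved, such networks need no iterative training subject to convergence concerns, which is part of their appeal.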
... This is a consequence of the fact that the bulk of dendritic modeling studies to date have dealt with vertebrate neurons. Finally, specific discussion of the computational significance of dendritic spines is left to a number of excellent reports and reviews available elsewhere (Koch and Poggio 1983; Miller et al. 1985; Perkel and Perkel 1985; Rall and Segev 1987; Segev and Rall 1988; Koch and Poggio 1987; Shepherd and Greer 1988; Zador et al. 1990; Koch et al. 1992). ...
... In this vein, the fourth idea we consider here is that nonlinear membrane mechanisms, if appropriately deployed in a dendritic tree, can allow the single neuron to act as a powerful multilayer computational network. The most common instantiation of this idea has been the proposal that individual neurons may implement a hierarchy of logical operations within their dendritic trees, consisting of AND, OR, AND-NOT, and XOR operations (Lorente de Nó and Condouris 1959; Llinas and Nicholson 1971; Poggio and Torre 1977; Koch et al. 1982, 1983; Shepherd et al. 1985, 1989; Rall and Segev 1987; Shepherd and Brayton 1987; Zador et al. 1992; see Fig. 19), i.e., the representation of a Boolean network within a dendritic tree. The emphasis of this latter study was on the second-order multiplicative interaction between excitatory and shunting inhibitory synapses underlying the so-called AND-NOT operation (Koch et al. ...
... In an attempt to explore the complexity of interactions among many synaptic inputs in a dendritic tree, Rall and Segev (1987) tested the input-output behavior of dendritic branches containing passive and excitable dendritic spines (Fig. 22). Several synaptic input conditions are shown at left, where black spines are excitable and white spines are passive. ...
Article
Full-text available
This review considers the input-output behavior of neurons with dendritic trees, with an emphasis on questions of information processing. The parts of this review are (1) a brief history of ideas about dendritic trees, (2) a review of the complex electrophysiology of dendritic neurons, (3) an overview of conceptual tools used in dendritic modeling studies, including the cable equation and compartmental modeling techniques, and (4) a review of modeling studies that have addressed various issues relevant to dendritic information processing.
... By combining such hyperbolic polynomials, which are commutative and associative binary operations, a model dendritic node may have more than two input variables. Prior results on the structure and computation of dendritic trees can be found in Koch and Poggio (1982, 1992), Koch, Poggio, and Torre (1983), Rall and Segev (1987), Shepherd and Brayton (1987), and Mel (1992a, 1992b, 1993, 1994, 2008). ...
... Dendrites use more than 60% of the energy consumed by the brain (Wong, 1989), occupy more than 99% of the surface of some neurons (Fox & Barnard, 1957), and are the largest component of neural tissue in volume (Sirevaag & Greenough, 1987). It was discovered in the 1980s and 1990s that dendrites are capable of performing information processing tasks (Koch & Poggio, 1982, 1992; Koch et al., 1983; Rall & Segev, 1987; Shepherd & Brayton, 1987; Mel, 1992a, 1992b, 1993, 1994, 2008; Poirazi, Brannon, & Mel, 2003). Johnson (2001, 2002) found dendritic inhibition to enhance neural coding properties and unsupervised learning in neural networks. ...
... This implies that a biological dendritic encoder has a reasonably small number m of inputs, and a large number of inputs must be processed by multiple dendritic encoders, each inputting a small subset of these inputs. These dendritic encoders are believed to correspond to what are called compartments or subunits in the literature (Koch & Poggio, 1982, 1992; Koch et al., 1983; Rall & Segev, 1987; Shepherd & Brayton, 1987; Mel, 1992a, 1992b, 1993, 1994, 2008; Poirazi et al., 2003). ...
Article
Full-text available
A biologically plausible low-order model (LOM) of biological neural networks is proposed. LOM is a recurrent hierarchical network of models of dendritic nodes and trees; spiking and nonspiking neurons; unsupervised, supervised covariance and accumulative learning mechanisms; feedback connections; and a scheme for maximal generalization. These component models are motivated and necessitated by making LOM learn and retrieve easily without differentiation, optimization, or iteration, and cluster, detect, and recognize multiple and hierarchical corrupted, distorted, and occluded temporal and spatial patterns. Four models of dendritic nodes are given that are all described as a hyperbolic polynomial that acts like an exclusive-OR logic gate when the model dendritic nodes input two binary digits. A model dendritic encoder that is a network of model dendritic nodes encodes its inputs such that the resultant codes have an orthogonality property. Such codes are stored in synapses by unsupervised covariance learning, supervised covariance learning, or unsupervised accumulative learning, depending on the type of postsynaptic neuron. A masking matrix for a dendritic tree, whose upper part comprises model dendritic encoders, enables maximal generalization on corrupted, distorted, and occluded data. It is a mathematical organization and idealization of dendritic trees with overlapped and nested input vectors. A model nonspiking neuron transmits inhibitory graded signals to modulate its neighboring model spiking neurons. Model spiking neurons evaluate the subjective probability distribution (SPD) of the labels of the inputs to model dendritic encoders and generate spike trains with such SPDs as firing rates. Feedback connections from the same or higher layers with different numbers of unit-delay devices reflect different signal traveling times, enabling LOM to fully utilize temporally and spatially associated information. 
Biological plausibility of the component models is discussed. Numerical examples are given to demonstrate how LOM operates in retrieving, generalizing, and unsupervised and supervised learning.
... Finally, Hebb's postulate makes no distinction between synapses at different locations in the dendritic arbor, but treats them all equivalently. Today, we know that the location of a synapse in the dendritic tree has important implications for plasticity, because cable filtering and active conductances in the dendritic arbor shape electrical signals as they propagate to and from the synapse (Rall and Shepherd, 1968; Rall and Segev, 1987) (see Chapters 14 and 15). As we detail in this chapter, dendrites thus considerably increase the computational repertoire of neurons. ...
... At the very beginning of the twentieth century, Ramón y Cajal (1904) suggested that spines may serve as electrical compartments, an idea that was later elaborated by several other investigators (Chang, 1952; Shepherd et al., 1985; Rall and Segev, 1987; Segev and Rall, 1988). The advent of advanced imaging techniques has allowed researchers to directly test this idea (Yuste, 2013). ...
... a reduction in the resistance of the narrow stem by which spines are attached to the parent dendrite . . . may change the relative weight of synapses and thus contribute to the observed potentiation of excitatory transmission (Rall, 1970, as cited in Bliss and Lomo, 1973; Rall and Segev, 1987; Diamond et al., 1970, as cited in Rall and Segev, 1987). Soon thereafter, Van Harreveld and Fifkova tested this hypothesis by using pioneering rapid freezing methods of tissue preparation for EM following in vivo stimulation protocols described in the first LTP papers. ...
Chapter
Neuroscientists have long been fascinated by the problem of how memory is formed, stored, and recalled. Cellular models of learning and memory propose that morphological remodeling of existing synapses and the formation of new ones lead to enduring changes in neuronal networks. We discuss the evidence of activity-dependent structural plasticity of dendritic spines in the hippocampus, focusing on the effects of environmental enrichment, behavioral learning, and in vitro manipulations of neuronal activity leading to synaptic plasticity. We review the actions of neurotrophins and hormones on dendritic spines in the context of hippocampal-dependent learning and memory. Finally, we discuss the consequences of abnormal dendritic spines for cognition and learning and memory in disorders associated with mental retardation.
... Some spines also have smooth endoplasmic reticulum (SER), while others are filled with polyribosomes [1]. Speculations regarding spine function have focused on the peculiar spine neck, which may serve to restrict diffusional exchange of signaling molecules between the spine head and parent dendrite [3,4] or to impede synaptic currents [5]. Spines show activity-and experience-dependent morphological plasticity in vitro [6,7] and in vivo [8], consistent with a role in memory storage in the mammalian brain [9]. ...
... But even without influencing the sizes of synaptic currents, spine neck resistances could still produce compartmentalization of membrane potential: synaptic currents passing through the spine neck produce a voltage difference between the spine head and dendrite that might selectively activate voltage-sensitive conductances in the spine head [5,23]. These differences are proportional to spine neck resistances and the amplitudes of synaptic currents. ...
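The stated proportionality can be made concrete with a back-of-envelope Ohm's-law estimate; the current and resistance values below are illustrative assumptions, not figures from the quoted text:

```python
# dV = I_syn * R_neck: the head-vs-dendrite voltage difference scales with
# both the neck resistance and the synaptic current amplitude.
I_syn = 10e-12                     # synaptic current: 10 pA (assumed)
for R_neck in (50e6, 500e6):       # neck resistance: 50 and 500 MOhm (assumed)
    dV_mV = I_syn * R_neck * 1e3   # volts -> millivolts
    print(f"R_neck = {R_neck / 1e6:.0f} MOhm -> head-dendrite dV = {dV_mV:.1f} mV")
```

With these assumed numbers a tenfold change in neck resistance moves the head depolarization from a fraction of a millivolt to several millivolts, enough in principle to matter for voltage-sensitive channels in the spine head.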
Article
Dendritic spines are cellular microcompartments that are isolated from their parent dendrites and neighboring spines. Recently, imaging studies of spine Ca2+ dynamics have revealed that Ca2+ can enter spines through voltage-sensitive and ligand-activated channels, as well as through Ca2+ release from intracellular stores. Relationships between spine Ca2+ signals and induction of various forms of synaptic plasticity are beginning to be elucidated. Measurements of spine Ca2+ concentration are also being used to probe the properties of single synapses and even individual calcium channels in their native environment.
... The versatile nature of synaptic computations can promote the intrinsic computational abilities of neurons to perform linear and nonlinear Boolean logic operations. Although since the 1980s many modeling studies have proposed that biophysical neuron models can act as linearly nonseparable XOR-like logic gates (Koch, Poggio, & Torre, 1982; Rall & Segev, 1987; Shepherd & Brayton, 1987; Zador, Claiborne, & Brown, 1991), the traditional belief is that while a single neuron is capable of basic AND/OR operations, an XOR gate requires neural circuits composed of multiple neuron layers and summing junctions (Minsky & Papert, 1969; Fromherz & Gaede, 1993). However, accumulating experimental evidence suggests the possibility that biological neurons can perform such nonlinear logic operations through nonlinear responses between synaptic inputs and neuronal outputs. ...
Article
Full-text available
Information processing in artificial neural networks is largely dependent on the nature of neuron models. While commonly used models are designed for linear integration of synaptic inputs, accumulating experimental evidence suggests that biological neurons are capable of nonlinear computations for many converging synaptic inputs via homo- and heterosynaptic mechanisms. This nonlinear neuronal computation may play an important role in complex information processing at the neural circuit level. Here we characterize the dynamics and coding properties of neuron models on synaptic transmissions delivered from two hidden states. The neuronal information processing is influenced by the cooperative and competitive interactions among synapses and the coherence of the hidden states. Furthermore, we demonstrate that neuronal information processing under two-input synaptic transmission can be mapped to linearly nonseparable XOR as well as basic AND/OR operations. In particular, mixtures of linear and nonlinear neuron models outperform neural networks consisting of only one type on the Fashion-MNIST test. This study provides a computational framework for assessing information processing of neuron and synapse models that may be beneficial for the design of brain-inspired artificial intelligence algorithms and neuromorphic systems.
... where U_H and V_H (mV) denote the membrane potentials in the horizontal cell spine head and network, respectively. The spine stem is modeled, as in previous studies, as a lumped Ohmic resistor, neglecting the stem's membrane and cable properties (Miller et al., 1985; Rall and Segev, 1987; Segev and Rall, 1988; Baer and Rinzel, 1991). The equation for the membrane potential in the sheet V_H(x,y,t) is a current balance relation for the capacitive, gap junction, stem, and ionic currents given by ...
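A minimal numerical sketch of the lumped Ohmic stem described above (all parameter values are assumed for illustration; the actual model couples this compartment to a PDE for the horizontal-cell sheet): the spine head is treated as a passive RC compartment whose only coupling to the network potential V_H is the stem current (U_H - V_H)/R_ss.

```python
# Forward-Euler integration of the spine-head current balance:
#   C_m dU/dt = -(U - E_rest)/R_m - (U - V_H)/R_ss + I_syn
C_m, R_m, R_ss = 1e-13, 1e10, 5e8       # farads, ohms (assumed values)
E_rest, V_H, I_syn = 0.0, 0.0, 20e-12   # volts, volts, amps (assumed)

U, dt = 0.0, 1e-6                       # initial potential, time step (s)
for _ in range(100_000):                # 0.1 s, far longer than the time constant
    dU = (-(U - E_rest) / R_m - (U - V_H) / R_ss + I_syn) / C_m
    U += dt * dU

# Analytic steady state: U* = (E_rest/R_m + V_H/R_ss + I_syn) / (1/R_m + 1/R_ss)
print(f"spine head depolarization: {U * 1e3:.2f} mV")
```

Lowering R_ss in this sketch pulls U toward the dendritic potential V_H, which is exactly the "relative weight of synapses" effect discussed in the spine-stem-resistance snippets above.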
Article
Full-text available
The retina is a part of the central nervous system that is accessible, well documented, and studied by researchers spanning the clinical, experimental, and theoretical sciences. Here, we mathematically model the subcircuits of the outer plexiform layer of the retina on two spatial scales: that of an individual synapse and that of the scale of the receptive field (hundreds to thousands of synapses). To this end we formulate a continuum spine model (a partial differential equation system) that incorporates the horizontal cell syncytium and its numerous processes (spines) within cone pedicles. With this multiscale modeling approach, detailed biophysical mechanisms at the synaptic level are retained while scaling up to the receptive field level. As an example of its utility, the model is applied to study background-induced flicker enhancement in which the onset of a dim background enhances the center flicker response of horizontal cells. Simulation results, in comparison with flicker enhancement data for square, slit, and disk test regions, suggest that feedback mechanisms that are voltage-axis modulators of cone calcium channels (for example, ephaptic and/or pH feedback) are robust in capturing the temporal dynamics of background-induced flicker enhancement. The value and potential of this continuum spine approach is that it provides a framework for mathematically modeling the input-output properties of the entire receptive field of the outer retina while implementing the latest models for transmission mechanisms at the synaptic level.
... Differences in the size and number of branches in the dendritic arbors of cortical pyramidal neurons affect the total number of spines contained within, reflecting putative differences in the number of excitatory inputs received by individual cells (Elston and Rosa, 1997, 1998; Elston et al., 1999a,b; Elston, 2000). Varying spine densities reported on the basal dendrites may also affect electrical and biochemical compartmentalization, cooperativity between inputs, and shunting inhibition (Koch et al., 1982; Shepherd et al., 1985; Rall and Segev, 1987; Shepherd and Brayton, 1987; Koch and Zador, 1993; Mainen, 1999). In addition, differences in the total length of, number of branches in, and diameters of the dendrites determine the cable properties (Rall, 1959), the degree of nonlinear compartmentalization (Rall, 1964; Koch et al., 1982), and the propagation of potentials (Stuart and Häusser, 1994; Spruston et al., 1995; Markram et al., 1997; Vetter et al., 2001) within the arbor (for review, see Rall et al., 1992; Stuart et al., 1997; Koch, 1999; Mel, 1999; Spruston et al., 1999; Häusser et al., 2000). ...
... Let us now include the synaptic activity of the rest of the synaptic contacts. Although the particular location of each synaptic contact in the dendritic arbor affects the way EPSCs are integrated [Rall and Segev, 1987; Segev and Rall, 1998; Schutter, 1999; Magee, 2000], we will neglect the spatial dimension by assuming that the EPSCs from each synapse add linearly at the soma. Therefore, the total synaptic current coming from N contacts takes the simple form ...
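Under the linear-summation assumption described above, the total somatic current is simply the sum of the per-contact EPSC time courses. A sketch using the common alpha-function EPSC shape (the shape and all parameter values are assumptions of this illustration, not of the quoted text):

```python
import math

def alpha_epsc(t, t_spike, g_peak, tau):
    """Alpha-function EPSC: rises then decays, peaking at g_peak at t_spike + tau."""
    s = t - t_spike
    return g_peak * (s / tau) * math.exp(1.0 - s / tau) if s > 0 else 0.0

def total_current(t, contacts):
    """Linear summation over N contacts, ignoring dendritic location."""
    return sum(alpha_epsc(t, ts, g, tau) for ts, g, tau in contacts)

# Two contacts: (spike time, peak amplitude, time constant), all assumed.
contacts = [(0.0, 1.0, 2.0), (1.0, 0.5, 2.0)]
print(total_current(2.0, contacts))
```

Each contact contributes independently, so the total is just a superposition; this is precisely the spatial simplification the snippet announces before writing the summed-current expression.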
... In spite of many uncertainties, currently available experimental data, combined with insights gained from theoretical and modeling studies, seem to support the hypothesis that dendritic trees are functionally compartmentalized, and could consist of a moderately large number of independent integrative "subunits", perhaps up to 100. The idea that dendritic trees are functionally compartmentalized has been discussed in a variety of forms over the years (Koch et al., 1982; Shepherd, Brayton, Miller, Segev, Rinzel, & Rall, 1985; Rall & Segev, 1987; Mel, 1992a, 1992b, 1993; Mel, Ruderman, & Archie, 1998; Archie & Mel, 2000; Poirazi & Mel). In the continued absence of empirical evidence sufficient to confirm or deny this "nonlinear subunit" hypothesis, several recent modeling studies have provided support for one simple version of the idea. ...
Article
Neurons are the building blocks of the brain. This article reviews the basic components and biophysical mechanisms that underlie nerve cell function, including a discussion of ion channels, the membrane potential, electrical signaling, synaptic function, and synaptic integration. The implications of these electrophysiological phenomena are then examined in relation to theories of information processing in single neurons.
... Experiments have also shown that synaptic inputs from nearby sources are non-linearly summed (Koch et al., 1983; Tuckwell, 1986; Schwindt and Crill, 1995; Polsky et al., 2004), whereas inputs from distant dendritic branches are linearly summed (Poirazi et al., 2003a,b; Gasparini et al., 2004; Polsky et al., 2004; Gasparini and Magee, 2006; Losonczy and Magee, 2006; Silver, 2010). The possibility of obtaining non-linear integration as a function of synapse co-localization in dendrites allows pyramidal neurons to multiply incoming signals at the dendritic stage before summing them at the somatic stage (Koch et al., 1983; Rall and Segev, 1987; Shepherd and Brayton, 1987; Mel, 1992, 1993, 2008; Sidiropoulou et al., 2006; Cazé et al., 2013). Such sigma-pi neurons compute weighted products in addition to weighted sums, extending their range of computational operations (Durbin and Rumelhart, 1989; Poirazi and Mel, 2001; Poirazi et al., 2003a,b; Polsky et al., 2004). ...
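The "multiply at the dendritic stage, then sum at the somatic stage" scheme can be sketched as a sigma-pi unit; the cluster groupings and weights below are illustrative assumptions:

```python
import math

def sigma_pi(x, clusters, weights):
    """Sum over dendritic clusters of weighted products of co-located inputs."""
    return sum(w * math.prod(x[i] for i in idx)
               for idx, w in zip(clusters, weights))

# Four inputs; inputs 0,1 share one branch, inputs 2,3 share another (assumed).
x = [1.0, 0.5, 2.0, 0.0]
clusters = [(0, 1), (2, 3)]
weights = [1.0, 3.0]
print(sigma_pi(x, clusters, weights))  # 1.0*(1.0*0.5) + 3.0*(2.0*0.0) = 0.5
```

Note the multiplicative gating: the second cluster contributes nothing because one of its co-located inputs is silent, no matter how strong the other is.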
Article
Full-text available
Selecting responses in working memory while processing combinations of stimuli depends strongly on their relations stored in long-term memory. However, the learning of XOR-like combinations of stimuli and responses according to complex rules raises the issue of the non-linear separability of the responses within the space of stimuli. One proposed solution is to add neurons that perform a stage of non-linear processing between the stimuli and responses, at the cost of increasing the network size. Based on the non-linear integration of synaptic inputs within dendritic compartments, we propose here an inter-synaptic (IS) learning algorithm that determines the probability of potentiating/depressing each synapse as a function of the co-activity of the other synapses within the same dendrite. The IS learning is effective with random connectivity and without either a priori wiring or additional neurons. Our results show that IS learning generates efficacy values that are sufficient for the processing of XOR-like combinations, on the basis of the sole correlational structure of the stimuli and responses. We analyze the types of dendrites involved in terms of the number of synapses from pre-synaptic neurons coding for the stimuli and responses. The synaptic efficacy values obtained show that different dendrites specialize in the detection of different combinations of stimuli. The resulting behavior of the cortical network model is analyzed as a function of inter-synaptic vs. Hebbian learning. Combinatorial priming effects show that the retrospective activity of neurons coding for the stimuli triggers XOR-like combination-selective prospective activity of neurons coding for the expected response. The synergistic effects of inter-synaptic learning and of mixed-coding neurons are simulated. The results show that, although each mechanism is sufficient by itself, their combined effects improve the performance of the network.
... Shepherd and Brayton [100] demonstrated that voltage-dependent Na+ channels could implement logical AND and OR operations between nearby synapses, and proposed that a dendritic tree might act like a hierarchical Boolean logic network [12]. Rall and Segev [98] showed that mixtures of active and passive spines could produce complicated and varied nonlinear interactions between inputs to multiple branches of a dendritic tree. Zador et al. [106] showed that voltage-dependent K+ channels could produce an XOR interaction between inputs delivered to two different dendritic sites. ...
Article
Full-text available
In pursuit of the goal to understand and eventually reproduce the diverse functions of the brain, a key challenge lies in reverse engineering the peculiar biology-based “technology” that underlies the brain's remarkable ability to process and store information. The basic building block of the nervous system is the nerve cell, or “neuron,” yet after more than 100 years of neurophysiological study and 60 years of modeling, the information processing functions of individual neurons, and the parameters that allow them to engage in so many different types of computation (sensory, motor, mnemonic, executive, etc.) remain poorly understood. In this paper, we review both historical and recent findings that have led to our current understanding of the analog spatial processing capabilities of dendrites, the major input structures of neurons, with a focus on the principal cell type of the neocortex and hippocampus, the pyramidal neuron (PN). We encapsulate our current understanding of PN dendritic integration in an abstract layered model whose spatially sensitive branch-subunits compute multidimensional sigmoidal functions. Unlike the 1-D sigmoids found in conventional neural network models, multidimensional sigmoids allow the cell to implement a rich spectrum of nonlinear modulation effects directly within their dendritic trees.
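The layered abstraction described in this review can be sketched with one-dimensional sigmoids per branch (the paper's subunits are multidimensional; all parameters below are assumed for illustration):

```python
import math

def sigmoid(u, gain=4.0, thresh=0.75):
    """Supra-linear branch nonlinearity: expansive below threshold."""
    return 1.0 / (1.0 + math.exp(-gain * (u - thresh)))

def two_layer_pn(branch_inputs, branch_weights, soma_weights):
    """Layer 1: each branch applies a sigmoid to its local weighted sum.
    Layer 2: the soma linearly combines the branch (subunit) outputs."""
    subunits = [sigmoid(sum(w * x for w, x in zip(ws, xs)))
                for xs, ws in zip(branch_inputs, branch_weights)]
    return sum(w * s for w, s in zip(soma_weights, subunits))

# The same total synaptic drive, clustered on one branch vs. scattered
# across two branches, produces different somatic responses.
clustered = [[1.0, 1.0], [0.0, 0.0]]
scattered = [[1.0, 0.0], [1.0, 0.0]]
w = [[0.5, 0.5], [0.5, 0.5]]
print(two_layer_pn(clustered, w, [1.0, 1.0]) > two_layer_pn(scattered, w, [1.0, 1.0]))
```

With this (assumed) expansive threshold, the clustered configuration drives the cell harder, which is the spatial sensitivity the layered model is meant to capture.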
... If this group of synapses is activated consistently and for a long time at a rate that differs from the average rate of the rest of the synapses, the G_syn,i values of this group of synapses will stand out relative to those of the other synapses, which remain practically unaltered. This stems from the partial electrical decoupling of activity generated at side branches and the main trunk (Rall and Segev, 1987). In other words, synapses on side branches are much more sensitive to localized persistent changes in presynaptic activity than are synapses on the main trunk, even under conditions of intensive presynaptic activity. ...
... Both ideas emphasize the coupling of individual synapses to the cell body and the uniformity and linearity thereof. A second view holds that the dendrites exist to create a number of independent functional compartments within which various kinds of non-linear computations can be carried out (Koch et al., 1983; Rall and Segev, 1987; Shepherd and Brayton, 1987; Mel, 1992a,b). Our results here support a particular version of this hypothesis in which the long, thin, unbranched, synapse-rich terminal dendrites may themselves act like classical neuron-like summing units, each with its own quasi-independent subunit nonlinearity. ...
Article
Full-text available
Understanding the principles of organization of the cerebral cortex requires insight into its evolutionary history. This has traditionally been the province of anatomists, but evidence regarding the microcircuit organization of different cortical areas is providing new approaches to this problem. Here we use the microcircuit concept to focus first on the principles of microcircuit organization of three-layer cortex in the olfactory cortex, hippocampus, and turtle general cortex, and compare it with six-layer neocortex. From this perspective it is possible to identify basic circuit elements for recurrent excitation and lateral inhibition that are common across all the cortical regions. Special properties of the apical dendrites of pyramidal cells are reviewed that reflect the specific adaptations that characterize the functional operations in the different regions. These principles of microcircuit function provide a new approach to understanding the expanded functional capabilities elaborated by the evolution of the neocortex.
Article
In certain biologically relevant computing scenarios, a neuron “pools” the outputs of multiple independent functional subunits, firing if any one of them crosses threshold. Recent studies suggest that active dendrites could provide the thresholding mechanism, so that both the thresholding and pooling operations could take place within a single neuron. A pooling neuron faces a difficult task, however. Dendrites can produce highly variable responses depending on the density and spatial patterning of their synaptic inputs, and bona fide dendritic firing may be very rare, making it difficult for a neuron to reliably detect when one of its many dendrites has “gone suprathreshold”. Our goal has been to identify biological adaptations that optimize a neuron's performance at the binary subunit pooling (BSP) task. Katz et al. (2009) pointed to the importance of spine density gradients in shaping dendritic responses. In a similar vein, we used a compartmental model to study how a neuron’s performance at the BSP task is affected by different spine density layouts and other biological variables. We found BSP performance was optimized when dendrites have (1) a decreasing spine density gradient (true for many types of pyramidal neurons); (2) low-to-medium resistance spine necks; (3) strong NMDA currents; (4) fast spiking Na+ channels; and (5) powerful hyperpolarizing inhibition. Our findings provide a normative account that links several neuronal properties within the context of a behaviorally relevant task, and thus provide new insights into nature's subtle strategies for optimizing the computing capabilities of neural tissue.
Article
Full-text available
In order to record the stream of autobiographical information that defines our unique personal history, our brains must form durable memories from single brief exposures to the patterned stimuli that impinge on them continuously throughout life. However, little is known about the computational strategies or neural mechanisms that underlie the brain's ability to perform this type of "online" learning. Based on increasing evidence that dendrites act as both signaling and learning units in the brain, we developed an analytical model that relates online recognition memory capacity to roughly a dozen dendritic, network, pattern, and task-related parameters. We used the model to determine what dendrite size maximizes storage capacity under varying assumptions about pattern density and noise level. We show that over a several-fold range of both of these parameters, and over multiple orders-of-magnitude of memory size, capacity is maximized when dendrites contain a few hundred synapses—roughly the natural number found in memory-related areas of the brain. Thus, in comparison to entire neurons, dendrites increase storage capacity by providing a larger number of better-sized learning units. Our model provides the first normative theory that explains how dendrites increase the brain’s capacity for online learning; predicts which combinations of parameter settings we should expect to find in the brain under normal operating conditions; leads to novel interpretations of an array of existing experimental results; and provides a tool for understanding which changes associated with neurological disorders, aging, or stress are most likely to produce memory deficits—knowledge that could eventually help in the design of improved clinical treatments for memory loss.
Chapter
This talk briefly reviewed results of earlier computations that simulated synaptic inputs to passive dendritic membrane. Then it reviewed dendritic spines and the possibility that some spine heads might have excitable membrane properties. Because some neurons have dendritic arbors that are studded with hundreds or thousands of spines, computations were used to explore theoretical examples of different synaptic input combinations to both excitable and passive spines. The results demonstrate the possibility of synaptic amplification by action potentials in spine heads, and the possibility of chain reactions between excitable spines. These nonlinear responses could be used for logical processing of inputs in dendritic arbors. Dendro-dendritic synapses were also reviewed briefly; they provide the possibility of graded synaptic interactions between dendritic arbors of different neurons (without the constraint of all-or-nothing impulses). These examples of modeling and computations based on anatomy and physiology demonstrate both a richness of possibilities and a challenge to future modeling of neurons and networks of neurons. The details below are quite brief; more complete presentations with illustrations are available in the references cited.
Chapter
In the vertebrate nervous system, communication between distant neurons is carried out using encoded pulse streams. Closely packed neurons may not produce impulses at all, relying instead on electrotonic spread of membrane potential differences to communicate (e.g. McCormick, 1990). Impulses are also used within the dendritic trees of some, or perhaps all, spatially extensive neurons (e.g. Llinas and Sugimori, 1980). However, their role is not well understood, partly because experimental measurements are extremely difficult to obtain within thin dendritic branches.
Conference Paper
This paper presents an overview of the current status of lattice based dendritic computing. Roughly speaking, lattice based dendritic computing refers to a biomimetic approach to artificial neural networks whose computational aspects are based on lattice group operations. We begin our presentation by discussing some important processes of biological neurons followed by a biomimetic model which implements these processes. We discuss the reasons and rationale behind this approach and illustrate the methodology with some examples. Global activities in this field as well as some potential research issues are also part of this discussion.
Chapter
The adhesion between neighboring neurons is sometimes reinforced by adherent junctions. These junctions are characterized, with few exceptions, by the presence of symmetrical plaques of dense material applied to the cytoplasmic face of the two apposed plasma membranes and of fine filaments converging upon these plaques. Along the midline of the intercellular space, which at the junction is slightly wider than elsewhere, a thin dense line parallel to the two apposed plasma membranes can sometimes be seen. Interneuronal adherent junctions are generally very small (Fig. 5.16) and are hence referred to as puncta adhaerentia. At these junctions, cell cohesion is maintained by adhesion molecules belonging mainly to the cadherin family.
Chapter
These computations focus on dendritic spines and the possibility that some spine heads possess excitable nerve membrane. It will be shown that such spines can provide synaptic amplification and, what is more important, that interactions between such spines can result in a local chain reaction involving clusters of such spines. Whether a cluster fires depends (with nonlinear sensitivity) on changes in synaptic excitation and inhibition and on changes in spine stem resistance (a possible locus for plasticity related to conditioning and/or learning) or other spine parameters. Several computed examples indicate the rich repertoire of logical operations that could be implemented by excitable spine clusters.
Chapter
Cells in the primary visual cortex operate on the signals coming from the retina and analyze different attributes of the visual scene. To do this, cortical cells have a number of specialized receptive field properties that allow them to respond selectively to the orientation, shape, color or movement of the visual stimulus. Because cortical neurons have more complex functional features than their input cells, many of these properties must be generated within the cortex. Therefore, knowledge of intrinsic cortical connections is a basic prerequisite for understanding the neural mechanisms by which the visual cortex analyses sensory information.
Chapter
Neural and glial cells are the fundamental cellular components of brain tissue. Compared to the cells that form the majority of the body's tissues, neural cells are distinguished by their shapes, which exhibit a large variety of branched geometries (Fig. 1). Such branched architecture is observed throughout the nervous tissue, but shape specificities can be noticed from one brain region to another. Nevertheless, among this diversity, similar geometries are found in the same region of different brains of the same species, or in the same region of various brains selected, for example, along the vertebrate phylogenetic scale. These observations suggest that neural branched geometry is certainly related, in part, to the expression of genetic factors which are preserved during phylogenesis.
Chapter
Full-text available
In a recent series of investigations, dramatic differences in pyramidal cell structure have been demonstrated in the primate neocortex. Initial studies in the cortex of the macaque monkey revealed a remarkable degree of heterogeneity in the structure of pyramidal cells sampled from homologous cortical layers among functionally related cortical areas. For example, there was, on average, a 13-fold difference in the number of spines in the dendritic trees of layer III pyramidal cells involved in visual processing. Moreover, the differences in the complexity of pyramidal cell structure among cortical areas were not random: cells involved in what are generally accepted to be more complex aspects of visual processing had more complex structure. Similar systematic trends in pyramidal cell specialization were found among cortical areas involved in somatosensation, locomotion, emotion, and even in executive cortical functions such as conceptualization, planning, and prioritizing. These studies revealed up to a 16-fold difference in the number of dendritic spines (putative excitatory inputs) in the dendritic trees of pyramidal cells in different cortical areas of the adult macaque monkey brain. Moreover, there are systematic and dramatic differences in the branching structure of pyramidal cells in the macaque cerebral cortex, with neurons in executive cortical areas having significantly more branches than those in sensory or motor cortex. Comparative studies of pyramidal cell structure have revealed some interesting similarities, and some striking differences, in the structure of pyramidal cells in the brains of various primate species. With up to a 30-fold difference in the estimates of the total number of dendritic spines in the dendritic trees of pyramidal cells in the primate cerebral cortex there is considerable scope for receiving different numbers of excitatory inputs. 
Moreover, the most branched pyramidal cells observed to date are found in the human prefrontal cortex, being, on average, more than twice as branched as those in the galago prefrontal cortex. Thus, species differences in pyramidal cell structure not only influence the number of inputs received within the dendritic tree, but how these inputs are integrated. Here we present an overview of the comparative data obtained from visual, somatosensory, motor, cingulate, and prefrontal cortex of the human, baboon, macaque monkey, vervet monkey, owl monkey, marmoset monkey, and galago. From these data, and those obtained from the archontan tree shrew, one can speculate on the functional consequences of specialization of the pyramidal cell phenotype, and how these may vary between species. New insights into evolutionary and developmental pressures that may shape the pyramidal cell phenotype in the adult brain are presented.
Conference Paper
This chapter discusses canonical neurons and their computational organization, using neurons in the olfactory pathway as models for analysis. Several types of programs for neural modeling, including ASTAP (IBM), SPICE, SABER (ANALOGY), GENESIS, and NEURON, are used; working with several programs has advantages over being limited to one approach. First, it provides a baseline for any application of a particular program to a new model; second, because documentation for the public-domain programs is often limited and support sometimes unavailable, the well-documented commercial programs ensure continuity in pursuing a problem. The chapter also presents the concept of a canonical neuron, and several examples from a model system, the olfactory pathway, are considered: the olfactory receptor neuron, the mitral/tufted cell, and the olfactory granule cell. Cortical pyramidal neurons compute complicated logic functions because of nonlinear interactions within their dendritic trees, and incorporating these properties into network nodes would confer more realistic and much enhanced computational capacity onto those networks.
Conference Paper
This chapter discusses a model of analog and digital processing in the single nerve cells that compose a network. This approach requires the construction of detailed models that incorporate knowledge of neuronal morphology and physiology as well as synaptic architecture. The approach is demonstrated for a morphologically and physiologically characterized dendritic tree of a cerebellar Purkinje cell (PC), as well as for a reconstructed axon from the cat somatosensory cortex. The PC reconstruction was performed with a computer-driven neuron tracing system (NTS); it does not include the dendritic spines, which account for more than 50% of the total dendritic area of these cells. The study also highlights several general principles: a single nerve cell can function as a network of almost independent subunits, and the voltage attenuation resulting from dendritic structure implies that the site of synaptic input is functionally significant.
Conference Paper
We present a two layer dendritic auto-associative memory with high rates of perfect recall of exemplar grayscale images distorted by different transformations or corrupted by random noise. The memory is a feedforward network based on dendritic computing employing lattice algebraic operations and is capable of dealing with real valued inputs. A major consequence of this approach is the direct and fast association of perfect or imperfect input patterns with stored associated patterns without any convergence problems.
Conference Paper
Recent discoveries in neuroscience imply that the basic computational elements are the dendrites that make up more than 50% of a cortical neuron's membrane. Neuroscientists now believe that the basic computation units are dendrites, capable of computing simple logic functions. This paper discusses two types of neural networks that take advantage of these new discoveries. The focus of this paper is on some learning algorithms in the two neural networks. Learning is in terms of lattice computations that take place in the dendritic structure as well as in the cell body of the neurons used in this model.
Conference Paper
We present a two layer dendritic hetero-associative memory that gives high percentages of correct classification for typical pattern recognition problems. The memory is a feedforward dendritic network based on lattice algebra operations and can be used with multivalued real inputs. A major consequence of this approach shows the inherent capability of prototype-class pattern associations to realize classification tasks in a direct and fast way without any convergence problems.
Article
Full-text available
The cable model of electrical conduction in neurons is central to our understanding of information processing in neurons. The conduction of action potentials in axons has been modeled as a nonlinear excitable cable (Hodgkin and Huxley, 1952), and the integration of postsynaptic signals in dendrites has been studied with analytic solutions to passive cables (Rall, 1977). Recently, several groups have examined the possibility of more complex signal processing in dendrites with complex morphologies and excitable membranes by numerical integration of the cable equations (Shepherd et al., 1985; Koch et al., 1983; Rall and Segev, 1985; Perkel and Perkel, 1985).
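The passive cable theory referred to above (Rall, 1977) has a simple steady-state consequence that is easy to compute. The sketch below is a toy illustration, not code from the cited works; the resistance values are arbitrary round numbers chosen only so the arithmetic is transparent.

```python
import math

def length_constant(r_m, r_i):
    """Length constant (cm) of a passive cable: lambda = sqrt(r_m / r_i),
    where r_m is the membrane resistance of a unit length (ohm*cm) and
    r_i is the axial resistance per unit length (ohm/cm)."""
    return math.sqrt(r_m / r_i)

def steady_state_voltage(v0, x, lam):
    """Steady-state voltage (mV) at distance x (cm) along a
    semi-infinite passive cable: V(x) = V0 * exp(-x / lambda)."""
    return v0 * math.exp(-x / lam)

lam = length_constant(r_m=2.0e5, r_i=5.0e6)  # 0.2 cm for these toy values
v = steady_state_voltage(10.0, 0.2, lam)     # one length constant away
```

At one length constant the potential has decayed to 1/e (about 37%) of its value at the input site, which is one reason the location of a synapse on the dendritic tree is functionally significant.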
Article
The back-propagation of axonal spikes into the soma and dendrites, where they might interfere with synaptic integration, is good evidence for this. In addition, the elaboration of the output signal of the neuron may involve all the morphological regions of the neuron's subunits: somatodendritic integration of synaptic inputs through linear and nonlinear processes, generation and patterning of action potential trains in the axo-somatic region, and selective distribution of these action potentials to the post-synaptic neurons in the axonal arborization. The behavior of neurons may change depending on their working environment, that is, on the physiological state of the animal and the task performed, through variations in the average synaptic input or through neuromodulation. As this chapter shows, many past conceptual advances on the operation of the neuron have relied on mathematically simple models, not on detailed models that try to incorporate all the geometric and electrical complexity of neurons revealed by experiments.
Article
Non-inactivating sodium channels, NI-NaC, were previously shown by computer simulations to be capable of conferring diverse types of electrical behavior on space-clamped excitable cells. In the present study it has been found that an even richer repertoire of types of behavior may occur in NI-NaC containing excitable cells which are not space-clamped. Thus, under appropriate conditions one may observe various types of traveling action potentials in a nerve process which is electrically bistable due to the presence of NI-NaC in its membrane. An action potential may occur in an axon which is in the polarized resting state leaving behind it the axon in the depolarized resting state, or vice versa. Under a different set of conditions an inverse action potential may occur in which the axon starts out, and ends up, in the depolarized resting state, and the action potential consists of a regenerative polarization. Minute quantities of NI-NaC which have slow deactivation kinetics may cause the generation of trains of repetitive action potentials without repetitive stimulation. Non-uniform steady-state distributions of membrane potentials may occur along a nerve process if the distribution of NI-NaC is not longitudinally uniform. Several cases were modeled in some detail. An electrically bistable septum which contains NI-NaC situated in the midst of an axon of the Hodgkin & Huxley type may be alternatingly switched from the polarized to the depolarized state, and vice versa, by brief injections of positive or negative currents at the location of the septum. Under appropriate conditions the depolarized septum blocks the traffic of action potentials along the axon, whereas the polarized septum does not interfere with the regular conduction of action potentials. 
Electrically bistable septa thus have the potential of serving as reversible switches, under the control of excitatory or inhibitory synaptic inputs, switches which govern the traffic of action potentials along nerve processes.
Article
Full-text available
We review recent work concerning the effects of dendritic structure on single neuron response and the dynamics of neural populations. We highlight a number of concepts and techniques from physics useful in studying the behaviour of the spatially extended neuron. First we show how the single neuron Green's function, which incorporates details concerning the geometry of the dendritic tree, can be determined using the theory of random walks. We then exploit the formal analogy between a neuron with dendritic structure and the tight-binding model of excitations on a disordered lattice to analyse various Dyson-like equations arising from the modelling of synaptic inputs and random synaptic background activity. Finally, we formulate the dynamics of interacting populations of spatially extended neurons in terms of a set of Volterra integro-differential equations whose kernels are the single neuron Green's functions. Linear stability analysis and bifurcation theory are then used to investigate two particular aspects of population dynamics: (i) pattern formation in a strongly coupled network of analog neurons and (ii) phase synchronization in a weakly coupled network of integrate-and-fire neurons.
Article
The distribution of thalamocortical (TC) and other synapses involving spiny stellate neurons in layer IV of the barrel region of mouse primary somatosensory cortex (SmI) was examined in seven male CD/1 mice. TC axon terminals were labeled by lesion-induced degeneration, which has been shown to label reliably all TC synapses in mouse barrel cortex. Spiny stellate neurons, labeled by Golgi impregnation and gold toning, were identified with the light microscope prior to thin sectioning and electron microscopy. Analysis of eight dendritic segments from seven spiny stellate neurons showed that most of their synapses are with their dendritic spines, rather than with their shafts. Axospinous synapses are primarily of the asymmetrical type, whereas axodendritic synapses are mainly of the symmetrical type. Dendrites of spiny stellate neurons consistently form thalamocortical synapses, most of which involve spine heads rather than spine stalks or dendritic shafts. From 10.4% to 22.9% of all asymmetrical synapses with dendrites of spiny stellate neurons involve TC axon terminals. In general, this is a higher range than the ranges that characterize the TC synaptic connectivity of dendrites belonging to other types of neurons, implying that spiny stellate neurons are perhaps more strongly influenced by TC synaptic input than other types of cortical neurons examined previously. Spines involved in TC synapses were distributed irregularly along each of the stellate cell dendrites; about half of the interspinous intervals between these spines were about 5 μm or less. Modulations of the efficacy of TC synaptic input to dendrites of layer IV spiny stellate neurons are discussed in the light of recently reported computer simulated analyses of axospinous synaptic connections.
Article
The combined light and electron microscopic analysis of Golgi-impregnated neural tissue is a potent tool for determining the connectivity of neural networks within the brain. In the experimental paradigms commonly applied in these studies, the Golgi-impregnated neurons are typically examined as the postsynaptic neuronal components. The structural characteristics and the pattern of distribution of their synaptic connections with other groups of identified neurons are analyzed. Due to the high power of resolution of the Golgi-electron microscopic technique, the ultrastructural analysis of Golgi-impregnated neurons can be expanded to elucidate activity-dependent structural alterations in their cytoarchitecture. These structural alterations can then be correlated under different physiological conditions with changes in the functional efficacy of the subcellular neuronal components. © 1992 Wiley-Liss, Inc.
Chapter
Summary. Recent discoveries in neuroscience imply that the basic computational elements are the dendrites that make up more than 50% of a cortical neuron’s membrane. Neuroscientists now believe that the basic computation units are dendrites, capable of computing simple logic functions. This paper discusses two types of neural networks that take advantage of these new discoveries. The focus of this paper is on some learning algorithms in the two neural networks. Learning is in terms of lattice computations that take place in the dendritic structure as well as in the cell body of the neurons used in this model.
Article
Given that the mind is the brain, as materialists insist, those who would understand the mind must understand the brain. Assuming that arrays of neural firing frequencies are highly salient aspects of brain information processing (the vector functional account), four hurdles to an understanding of the brain are identified and inspected: indeterminacy, micro-specificity, chaos, and openness.
Article
Full-text available
The Nernst-Planck equation for electrodiffusion was applied to axons, dendrites and spines. For thick processes (1 μm) the results of computer simulation agreed accurately with the cable model for passive conduction and for propagating action potentials. For thin processes (0.1 μm) and spines, however, the cable model may fail during transient events such as synaptic potentials. First, ionic concentrations can rapidly change in small compartments, altering ionic equilibrium potentials and the driving forces for movement of ions across the membrane. Second, longitudinal diffusion may dominate over electrical forces when ionic concentration gradients become large. We compare predictions of the cable model and the electrodiffusion model for excitatory postsynaptic potentials on spines and show that there are significant discrepancies for large conductance changes. The electrodiffusion model also predicts that inhibition on small structures such as spines and thin processes is ineffective. We suggest a modified cable model that gives better agreement with the electrodiffusion model.
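The claim that ionic concentrations change rapidly in small compartments can be checked with back-of-the-envelope arithmetic. The sketch below is a rough illustration with assumed round-number values (a 10 pA current, a 0.1 μm-radius spherical spine head, typical mammalian Na+ concentrations), not code from the cited paper.

```python
import math

R, F = 8.314, 96485.0  # gas constant J/(mol*K), Faraday constant C/mol
T = 310.0              # approximate body temperature, K

def nernst(z, c_out, c_in):
    """Nernst equilibrium potential (V): E = (R*T)/(z*F) * ln(c_out/c_in)."""
    return (R * T) / (z * F) * math.log(c_out / c_in)

def delta_concentration(i_amp, dt_s, z, volume_liters):
    """Concentration change (mol/L) when a current i_amp (A) flows for
    dt_s seconds into a compartment of the given volume, carried by
    ions of valence z."""
    return (i_amp * dt_s) / (z * F * volume_liters)

# Sodium reversal potential with typical concentrations (145 mM out, 12 mM in)
e_na = nernst(1, 145e-3, 12e-3)  # roughly +0.067 V

# 10 pA for 1 ms into a 0.1 um-radius spherical spine head (assumed size)
spine_volume = 4.0 / 3.0 * math.pi * (1e-5) ** 3 * 1e-3  # cm^3 -> liters
d_na = delta_concentration(10e-12, 1e-3, 1, spine_volume)  # mol/L
```

Even this tiny current shifts the internal concentration by tens of millimolar within a millisecond, large compared with a resting internal Na+ of about 12 mM. This is exactly the regime in which the cable model's assumption of fixed driving forces breaks down.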
Article
Neurons in the central nervous system of mammals and many other species receive most of their synaptic inputs in their dendritic branches and spines, but the precise manner in which this information is processed in the dendrites is not understood. In order to gain insight into these mechanisms, simulations of interactions between distal dendritic spines with an excitable membrane have been carried out, using an electrical circuit analysis program for the compartmental representation of a dendrite and several spines. Interactions between responses to single and paired excitatory and inhibitory synaptic inputs have been analyzed. Basic logic operations, including AND gates, OR gates and AND-NOT gates, arise from these interactions. The results suggest the computational power and precision of excitable spines in distal branches of neuronal dendrites, especially those of pyramidal neurons in the cerebral cortex. The applicability to information processing in distal dendrites is discussed.
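The gate-like behavior described above comes from the all-or-nothing threshold of excitable spine membrane: whether a cluster fires depends on whether combined excitation, minus inhibition, crosses threshold. The toy model below is a deliberate simplification with made-up weights and thresholds, not the compartmental simulation of the paper; it only shows how AND, OR, and AND-NOT gates fall out of a single threshold nonlinearity.

```python
def spine_cluster(exc_a, exc_b, inh=0.0, threshold=1.5):
    """Toy all-or-nothing spine cluster: fires iff the net synaptic
    drive (excitation minus a stronger-weighted inhibition) reaches
    threshold.  Inputs are 0 or 1."""
    drive = exc_a + exc_b - 2.0 * inh
    return drive >= threshold

def AND(a, b):      # both excitatory inputs needed to reach threshold
    return spine_cluster(a, b, threshold=1.5)

def OR(a, b):       # lower threshold lets either input fire the cluster
    return spine_cluster(a, b, threshold=0.5)

def AND_NOT(a, b):  # inhibition driven by input b vetoes firing
    return spine_cluster(a, 0.0, inh=b, threshold=0.5)
```

Changing a single parameter (the threshold, or equivalently a spine stem resistance) converts one gate into another, which is one way to read the paper's suggestion that such parameters are a locus of plasticity.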
Article
In this paper we examine models of neural coding in the central nervous system at both the cellular and multi-cellular level. The proliferation of neural network models proposed for modeling cognitive and mnemonic capabilities of brains or brain regions suggests the need for neurobiologists to directly test these models. Here, we examine the assumptions of these models in light of physiological and anatomical constraints of real neurons. We advocate the interaction of neurobiology and modeling efforts to cause these models to evolve.
Conference Paper
This paper presents a novel, three-stage, auto-associative memory based on lattice algebra. The first two stages of this memory consist of correlation matrix memories within the lattice domain. The third and final stage is a two-layer feed-forward network based on dendritic computing. The output nodes of this feed-forward network yield the desired pattern vector association. The computations performed by each stage are all lattice based and, thus, provide for fast computation and avoidance of convergence problems. Additionally, the proposed model is extremely robust in the presence of noise. Bounds of allowable noise that guarantees perfect output are also discussed.
Conference Paper
Computation in a neuron of a traditional neural network is accomplished by summing the products of neural values and connection weights of all the neurons in the network connected to it. The new state of the neuron is then obtained by an activation function which sets the state to either zero or one, depending on the computed value. We provide an alternative way of computation in an artificial neuron based on lattice algebra and dendritic computation. The neurons of the proposed model bear a close resemblance to the morphology of biological neurons and mimic some of their behavior. The computational and pattern recognition capabilities of this model are explored by means of illustrative examples and detailed discussion.
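To make the contrast concrete, here is a minimal sketch (with invented weights, in the spirit of lattice-based dendritic models rather than a reproduction of any specific published network) of the two computation styles the abstract compares: the classical weighted-sum neuron, and a single-dendrite lattice computation in which each input contributes additively but the dendrite combines contributions with the lattice meet (min), so the neuron responds exactly when the input falls inside a hyperbox.

```python
def classical_neuron(x, w, bias=0.0):
    """Classical unit: weighted sum of inputs followed by a hard threshold."""
    s = sum(xi * wi for xi, wi in zip(x, w)) + bias
    return 1 if s >= 0 else 0

def lattice_dendrite(x, w_exc, w_inh):
    """Single-dendrite lattice computation: input i contributes
    (x_i + w_exc[i]) through its excitatory fiber and -(x_i + w_inh[i])
    through its inhibitory fiber; the dendrite combines all
    contributions with min (the lattice meet).  The output is 1 exactly
    when every x_i lies in the interval [-w_exc[i], -w_inh[i]]."""
    v = min(min(xi + we for xi, we in zip(x, w_exc)),
            min(-(xi + wi) for xi, wi in zip(x, w_inh)))
    return 1 if v >= 0 else 0

# With w_exc = [0, 0] and w_inh = [-1, -1] the dendrite accepts
# precisely the unit square [0, 1] x [0, 1]:
inside = lattice_dendrite([0.5, 0.5], [0.0, 0.0], [-1.0, -1.0])   # 1
outside = lattice_dendrite([1.5, 0.5], [0.0, 0.0], [-1.0, -1.0])  # 0
```

Because the decision region is a box computed in one pass of max/min and addition, training reduces to setting interval bounds from the data, which is why these memories need no iterative convergence scheme.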
Article
A learning machine, called a clustering interpreting probabilistic associative memory (CIPAM), is proposed. CIPAM consists of a clusterer and an interpreter. The clusterer is a recurrent hierarchical neural network of unsupervised processing units (UPUs). The interpreter is a number of supervised processing units (SPUs) that branch out from the clusterer. Each processing unit (PU), UPU or SPU, comprises “dendritic encoders” for encoding inputs to the PU, “synapses” for storing resultant codes, a “nonspiking neuron” for generating inhibitory graded signals to modulate neighboring spiking neurons, “spiking neurons” for computing the subjective probability distribution (SPD) or the membership function, in the sense of fuzzy logic, of the label of said inputs to the PU and generating spike trains with the SPD or membership function as the firing rates, and a masking matrix for maximizing generalization. While UPUs employ unsupervised covariance learning mechanisms, SPUs employ supervised ones. They both also have unsupervised accumulation learning mechanisms. The clusterer of CIPAM clusters temporal and spatial data. The interpreter interprets the resultant clusters, effecting detection and recognition of temporal and hierarchical causes.