Article

The motivation, principles, and state of neuromorphic engineering

Authors:
  • SUNY College of Nanoscale Science & Engineering

Abstract

Advances in integrated circuitry from the 1950s to the present day have enabled a revolution in technology across the world. However, fundamental limits of circuitry make further improvements through historically successful methods increasingly challenging. It is becoming clear that to address new challenges and applications, new methods of computation will be required. One promising field is neuromorphic engineering, a broad field which applies biologically inspired principles to create alternative computational architectures and methods. In this work, we address why neuromorphic engineering is one of the most promising fields within emerging computational technology, detail its common principles and models, and summarize its current state and future challenges.


... However, for neuromorphic applications it can be advantageous, as the processes that underpin biological synaptic and neuronal behavior are not strictly deterministic [25]. Figure 2(a) plots the conductance (using a read bias of +5 V) as a function of the cumulative number of successively applied 50 ms voltage pulses (1000 in a positive sense followed by 1000 in a negative sense). Clearly, the conductance levels progressively and asymptotically increase, showing plasticity, as the number of positive voltage pulses increases, and then asymptotically decrease with the cumulative number of negative pulses. ...
Article
The electrical conductivity of lithium niobate thin film capacitor structures depends on the density of conducting 180° domain walls, which traverse the interelectrode gap, and on their inclination angle with respect to the polarization axis. Both microstructural characteristics can be altered by applying electric fields, but changes are time-dependent and relax, upon field removal, into a diverse range of remanent states. As a result, the measured conductance is a complex history-dependent function of electric field and time. Here, we show that complexity in the kinetics of microstructural change, in this ferroelectric system, can generate transport behavior that is strongly reminiscent of that seen in key neurological building blocks, such as synapses. Successive voltage pulses, of positive and negative polarity, progressively enhance or suppress domain wall related conductance (analogous to synaptic potentiation and depression), in a way that depends on both the pulse voltage magnitude and frequency. Synaptic spike-rate-dependent plasticity and even Ebbinghaus forgetting behavior, characteristic of learning and memory in the brain, can be emulated as a result. Conductance can also be changed according to the time difference between designed identical voltage pulse waveforms, applied to top and bottom contact electrodes, in a way that can mimic both Hebbian and anti-Hebbian spike-timing-dependent plasticity in synapses. While such features have been seen in, and developed for, other kinds of memristors, few have previously been realized through the manipulation of conducting ferroelectric domain walls.
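As a rough illustration of the pulse-driven potentiation and depression described above, the following sketch (with arbitrary illustrative parameters, not values from the paper) models a conductance that saturates toward an upper bound under positive pulses and toward a lower bound under negative pulses:

```python
# Minimal sketch (illustrative parameters, not from the paper): the conductance g
# saturates toward G_MAX under positive pulses and toward G_MIN under negative
# pulses, reproducing the asymptotic potentiation/depression described above.
G_MIN, G_MAX, ALPHA = 0.0, 1.0, 0.01  # arbitrary units; ALPHA sets the per-pulse update size

def apply_pulse(g, polarity):
    """Return the conductance after one voltage pulse (+1 potentiate, -1 depress)."""
    if polarity > 0:
        return g + ALPHA * (G_MAX - g)  # saturating increase
    return g - ALPHA * (g - G_MIN)      # saturating decrease

g, trace = 0.5, []
for polarity in [+1] * 1000 + [-1] * 1000:  # 1000 positive then 1000 negative pulses
    g = apply_pulse(g, polarity)
    trace.append(g)
```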
... However, as the need to model larger connectomes and longer time series is growing rapidly, newer methods are being developed [32][33][34][35][36]. ... [Fig. 4 caption] Rheobase, the minimum threshold of a nerve fiber under sustained stimulation, and chronaxie, the time constant of the fiber standardized at twice the rheobase stimulus strength, together determine the fiber threshold under stimuli of any length in passive fiber modeling. ...
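For reference, rheobase and chronaxie are commonly combined in the classical Lapicque strength-duration relation; one standard passive-fiber form (our notation, not necessarily the exact expression used in the chapter) is:

```latex
% Lapicque strength-duration relation: I_th(t) is the threshold current for a
% stimulus of duration t, I_rh the rheobase, and t_c the chronaxie.
I_{\mathrm{th}}(t) = I_{\mathrm{rh}}\left(1 + \frac{t_c}{t}\right),
\qquad I_{\mathrm{th}}(t_c) = 2\, I_{\mathrm{rh}}
```

At a stimulus duration equal to the chronaxie, the threshold is exactly twice the rheobase, matching the definition given in the caption above.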
Chapter
Full-text available
We have truly entered the Age of the Connectome due to a confluence of advanced imaging tools, methods such as the flavors of functional connectivity analysis and inter-species connectivity comparisons, and computational power to simulate neural circuitry. The interest in connectomes is reflected in the exponentially rising number of articles on the subject. What are our goals? What are the “functional requirements” of connectome modelers? We give a perspective on these questions from our group whose focus is modeling neurological disorders, such as neuropathic back pain, epilepsy, Parkinson’s disease, and age-related cognitive decline, and treating them with neuromodulation.
Article
Full-text available
The high performance requirements of today's computer networks are limiting their ability to support important requirements of the future. Two properties essential to assuring cost-efficient computer networks and to supporting new, challenging network scenarios are energy-efficient operation and support for cognitive computational models. These requirements are hard to fulfill without challenging the current architecture behind network packet processing elements such as routers and switches. Notably, these are currently dominated by the use of traditional transistor-based components. In this article, we contribute an in-depth analysis of alternative architectural design decisions to improve the energy footprint and computational capabilities of future network packet processors by shifting from transistor-based components to a novel component, the memristor. A memristor is a computational component characterized by non-volatile operations on a physical state, mostly represented in the form of (electrical) resistance. Its state can be read or altered by input signals, e.g. electrical pulses, where the future state always depends on the past state. Unlike in traditional von Neumann architectures, the principles behind memristors impose that memory operations and computations are inherently colocated. In combination with the non-volatility, this allows memristors to be built at nanoscale size and significantly reduces energy consumption. At the same time, memristors appear highly suitable for modeling cognitive functionality due to their state-dependent transitions. In cognitive architectures, our survey contributes to the study of memristor-based Ternary Content Addressable Memory (TCAM) used for storage of cognitive rules inside packet processors. Moreover, we analyze novel memristor-based cognitive computational architectures built upon self-learning capabilities that harness the non-volatility and state-based response of memristors (including reconfigurable architectures, reservoir computing architectures, neural network architectures, and neuromorphic computing architectures).
Article
Full-text available
Spike-encoded stochastic neural networks are believed to be energy efficient and biologically plausible, and increasing effort has recently been made to translate their cognitive power into hardware implementations. Here, a stacked indium–gallium–zinc–oxide (IGZO)‐based threshold switching memristor with the essential properties of a spiking stochastic neuron is introduced. Such an IGZO spiking stochastic neuron shows a sigmoid firing probability that can be tuned by the amplitude, width, and frequency of the applied pulse sequence. More importantly, the stacked configuration is experimentally demonstrated to eliminate the switching variation seen in a single memristor, and a narrow relative deviation (≤6.8%) of the firing probability can be achieved. The IGZO stochastic neuron is applied to perform probabilistic unsupervised learning for handwritten digit reconstruction based on a restricted Boltzmann machine, and a recognition accuracy of 91.2% can be achieved. Such an IGZO stochastic neuron with reproducible firing probability emulates probabilistic computing in the brain, which is of significant importance to hardware implementations of spiking neural networks that analyze sensory stimuli, produce adequate motor control, and make reasonable inferences. The stochastic neuron device based on a stacked indium–gallium–zinc–oxide threshold switching memristor shows a sigmoid firing probability that can be tuned by the parameters of the applied pulse sequence. This stochastic neuron shows a narrow relative deviation (≤6.8%) of the firing probability and is applied to a handwritten digit recognition task with an accuracy of 91.2%.
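A minimal behavioral sketch of such a stochastic neuron, assuming a generic logistic dependence of firing probability on pulse amplitude (the slope K and midpoint V0 are illustrative, not measured device parameters):

```python
import numpy as np

# Minimal sketch of a stochastic threshold-switching neuron whose firing
# probability follows a sigmoid of the applied pulse amplitude (assumed values).
rng = np.random.default_rng(0)
V0, K = 1.0, 5.0  # sigmoid midpoint (V) and slope (1/V), illustrative assumptions

def firing_probability(v_pulse):
    """Sigmoid firing probability as a function of pulse amplitude (V)."""
    return 1.0 / (1.0 + np.exp(-K * (v_pulse - V0)))

def fires(v_pulse):
    """One stochastic trial: True if the neuron fires for this pulse."""
    return rng.random() < firing_probability(v_pulse)

# Estimate the firing probability at one amplitude from repeated identical pulses.
p_est = np.mean([fires(1.1) for _ in range(1000)])
print(p_est, firing_probability(1.1))
```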
Article
Full-text available
Author summary How can we capture the incredible complexity of brain circuits in quantitative models, and what can such models teach us about mechanisms underlying brain activity? To answer these questions, we set out to build extensive, bio-realistic models of brain circuitry by employing systematic datasets on brain structure and function. Here we report the first modeling results of this project, focusing on layer 4 of the primary visual cortex (V1) of the mouse. Our simulations reproduced a variety of experimental observations in response to a large battery of visual stimuli. The results elucidated circuit mechanisms determining patterns of neuronal activity in layer 4, in particular, the roles of feedforward thalamic inputs and specific patterns of intracortical connectivity in producing tuning of neuronal responses to the orientation of motion. Simplification of neuronal models led to specific deficiencies in reproducing experimental data, giving insights into how biological details contribute to various aspects of brain activity. To enable future development of more sophisticated models, we make the software code, the model, and simulation results publicly available.
Article
Full-text available
Recently, integrated optics has gained interest as a hardware platform for implementing machine learning algorithms. Of particular interest are artificial neural networks, since matrix-vector multiplications, which are used heavily in artificial neural networks, can be done efficiently in photonic circuits. The training of an artificial neural network is a crucial step in its application. However, currently on the integrated photonics platform there is no efficient protocol for the training of these networks. In this work, we introduce a method that enables highly efficient, in situ training of a photonic neural network. We use adjoint variable methods to derive the photonic analogue of the backpropagation algorithm, which is the standard method for computing gradients of conventional neural networks. We further show how these gradients may be obtained exactly by performing intensity measurements within the device. As an application, we demonstrate the training of a numerically simulated photonic artificial neural network. Beyond the training of photonic machine learning implementations, our method may also be of broad interest to experimental sensitivity analysis of photonic systems and the optimization of reconfigurable optics platforms.
Article
Full-text available
Loihi is a 60 mm² chip fabricated in Intel's 14 nm process that advances the state-of-the-art modeling of spiking neural networks in silicon. It integrates a wide range of novel features for the field, such as hierarchical connectivity, dendritic compartments, synaptic delays, and, most importantly, programmable synaptic learning rules. Running a spiking convolutional form of the Locally Competitive Algorithm, Loihi can solve LASSO optimization problems with over three orders of magnitude superior energy-delay product compared to conventional solvers running on a CPU at iso-process/voltage/area. This provides an unambiguous example of spike-based computation outperforming all known conventional solutions.
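For context, the LASSO (sparse coding) objective that the Locally Competitive Algorithm minimizes can be written in its standard form (notation ours, not taken from the Loihi paper):

```latex
% Standard LASSO / sparse-coding objective: x is the input signal, \Phi a fixed
% dictionary, a the sparse coefficient vector, and \lambda the sparsity penalty.
\min_{a}\; \tfrac{1}{2}\,\lVert x - \Phi a \rVert_2^2 \;+\; \lambda\,\lVert a \rVert_1
```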
Article
Full-text available
Neuromorphic computing has come to refer to a variety of brain-inspired computers, devices, and models that contrast the pervasive von Neumann computer architecture. This biologically inspired approach has created highly connected synthetic neurons and synapses that can be used to model neuroscience theories as well as solve challenging machine learning problems. The promise of the technology is to create a brain-like ability to learn and adapt, but the technical challenges are significant, starting with an accurate neuroscience model of how the brain works, to finding materials and engineering breakthroughs to build devices to support these models, to creating a programming framework so the systems can learn, to creating applications with brain-like capabilities. In this work, we provide a comprehensive survey of the research and motivations for neuromorphic computing over its history. We begin with a 35-year review of the motivations and drivers of neuromorphic computing, then look at the major research areas of the field, which we define as neuro-inspired models, algorithms and learning approaches, hardware and devices, supporting systems, and finally applications. We conclude with a broad discussion on the major research topics that need to be addressed in the coming years to see the promise of neuromorphic computing fulfilled. The goals of this work are to provide an exhaustive review of the research conducted in neuromorphic computing since the inception of the term, and to motivate further work by illuminating gaps in the field where new research is needed.
Article
Full-text available
During the last ten years, superconducting circuits have passed from being interesting physical devices to becoming contenders for near-future useful and scalable quantum information processing (QIP). Advanced quantum simulation experiments have been shown with up to nine qubits, while a demonstration of Quantum Supremacy with fifty qubits is anticipated in just a few years. Quantum Supremacy means that the quantum system can no longer be simulated by the most powerful classical supercomputers. Integrated classical-quantum computing systems are already emerging that can be used for software development and experimentation, even via web interfaces. Therefore, the time is ripe for describing some of the recent development of superconducting devices, systems and applications. As such, the discussion of superconducting qubits and circuits is limited to devices that are proven useful for current or near future applications. Consequently, the centre of interest is the practical applications of QIP, such as computation and simulation in Physics and Chemistry.
Article
Full-text available
Classical Hebbian learning puts the emphasis on joint pre- and postsynaptic activity, but neglects the potential role of neuromodulators. Since neuromodulators convey information about novelty or reward, the influence of neuromodulators on synaptic plasticity is useful not just for action learning in classical conditioning, but also to decide 'when' to create new memories in response to a flow of sensory stimuli. In this review, we focus on timing requirements for pre- and postsynaptic activity in conjunction with one or several phasic neuromodulatory signals. While the emphasis of the text is on conceptual models and mathematical theories, we also discuss some experimental evidence for neuromodulation of Spike-Timing-Dependent Plasticity. We highlight the importance of synaptic mechanisms in bridging the temporal gap between sensory stimulation and neuromodulatory signals, and develop a framework for a class of neo-Hebbian three-factor learning rules that depend on presynaptic activity, postsynaptic variables, and the influence of neuromodulators.
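A minimal sketch of one such neo-Hebbian three-factor rule, using an eligibility trace gated by a neuromodulatory signal (time constants and learning rate are illustrative assumptions, not values from the review):

```python
# Minimal sketch of a three-factor learning rule: a pre-post coincidence term is
# accumulated in a slowly decaying eligibility trace e, and the weight w only
# changes when a neuromodulatory signal m(t) is present.
DT, TAU_E, ETA = 1.0, 500.0, 0.01  # time step (ms), trace time constant (ms), learning rate

def three_factor_step(w, e, pre, post, m):
    """One time step; pre/post are 0/1 spike indicators, m is the neuromodulator level."""
    e = e + DT * (-e / TAU_E) + pre * post  # eligibility trace of Hebbian coincidences
    w = w + ETA * m * e * DT                # weight update gated by the third factor m
    return w, e
```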
Article
Full-text available
A recent publication provides the network graph for a neocortical microcircuit comprising 8 million connections between 31,000 neurons (H. Markram, et al., Reconstruction and simulation of neocortical microcircuitry, Cell, 163 (2015) no. 2, 456-492). Since traditional graph-theoretical methods may not be sufficient to understand the immense complexity of such a biological network, we explored whether methods from algebraic topology could provide a new perspective on its structural and functional organization. Structural topological analysis revealed that directed graphs representing connectivity among neurons in the microcircuit deviated significantly from different varieties of randomized graph. In particular, the directed graphs contained on the order of $10^7$ simplices (groups of neurons with all-to-all directed connectivity). Some of these simplices contained up to 8 neurons, making them the most extreme neuronal clustering motif ever reported. Functional topological analysis of simulated neuronal activity in the microcircuit revealed novel spatio-temporal metrics that provide an effective classification of functional responses to qualitatively different stimuli. This study represents the first algebraic topological analysis of structural connectomics and connectomics-based spatio-temporal activity in a biologically realistic neural microcircuit. The methods used in the study show promise for more general applications in network science.
Article
Full-text available
The complexity of the brain and the protean nature of behavior remain the most elusive, but also the most important, areas of science. This book contains chapters written by twenty-three experts from many areas of systems neuroscience, from evolution to qualia, each formulating and discussing one problem. Although each chapter was written independently and can be read separately, together they provide a roadmap to the field of systems neuroscience. This book serves as a source of inspiration for future explorers of the brain.
Article
Full-text available
Neocortical neurons have thousands of excitatory synapses. It is a mystery how neurons integrate the input from so many synapses and what kind of large-scale network behavior this enables. It has been previously proposed that non-linear properties of dendrites enable neurons to recognize multiple patterns. In this paper we extend this idea by showing that a neuron with several thousand synapses arranged along active dendrites can learn to accurately and robustly recognize hundreds of unique patterns of cellular activity, even in the presence of large amounts of noise and pattern variation. We then propose a neuron model where some of the patterns recognized by a neuron lead to action potentials and define the classic receptive field of the neuron, whereas the majority of the patterns recognized by a neuron act as predictions by slightly depolarizing the neuron without immediately generating an action potential. We then present a network model based on neurons with these properties and show that the network learns a robust model of time-based sequences. Given the similarity of excitatory neurons throughout the neocortex and the importance of sequence memory in inference and behavior, we propose that this form of sequence memory is a universal property of neocortical tissue. We further propose that cellular layers in the neocortex implement variations of the same sequence memory algorithm to achieve different aspects of inference and behavior. The neuron and network models we introduce are robust over a wide range of parameters as long as the network uses a sparse distributed code of cellular activations. The sequence capacity of the network scales linearly with the number of synapses on each neuron. Thus neurons need thousands of synapses to learn the many temporal patterns in sensory stimuli and motor sequences.
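The core idea that a small dendritic segment can robustly recognize a sparse activity pattern can be sketched as follows (sizes and the match threshold are illustrative assumptions, not the parameters analyzed in the paper):

```python
import numpy as np

# Minimal sketch: a dendritic segment with a few dozen synapses sampled from a
# sparse activity pattern recognizes that pattern whenever at least THETA of its
# synapses fall on currently active cells.
rng = np.random.default_rng(1)
N_CELLS, N_ACTIVE, N_SYN, THETA = 10_000, 200, 40, 15

pattern = rng.choice(N_CELLS, size=N_ACTIVE, replace=False)  # a sparse pattern of active cells
segment = rng.choice(pattern, size=N_SYN, replace=False)     # synapses sampled from that pattern

def segment_matches(active_cells):
    """True if at least THETA of the segment's synapses contact active cells."""
    return int(np.isin(segment, active_cells).sum()) >= THETA

noisy = np.union1d(rng.choice(pattern, size=150, replace=False),  # pattern with 25% dropout
                   rng.choice(N_CELLS, size=50, replace=False))   # plus unrelated noise
random_pattern = rng.choice(N_CELLS, size=N_ACTIVE, replace=False)

print(segment_matches(pattern), segment_matches(noisy), segment_matches(random_pattern))
# Typically: True True False -- robust to noise and dropout, yet selective.
```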
Article
Full-text available
The hippocampal system is critical for storage and retrieval of declarative memories, including memories for locations and events that take place at those locations. Spatial memories place high demands on capacity. Memories must be distinct to be recalled without interference and encoding must be fast. Recent studies have indicated that hippocampal networks allow for fast storage of large quantities of uncorrelated spatial information. The aim of this article is to review and discuss some of this work, taking as a starting point the discovery of multiple functionally specialized cell types of the hippocampal-entorhinal circuit, such as place, grid, and border cells. We will show that grid cells provide the hippocampus with a metric, as well as a putative mechanism for decorrelation of representations, that the formation of environment-specific place maps depends on mechanisms for long-term plasticity in the hippocampus, and that long-term spatiotemporal memory storage may depend on offline consolidation processes related to sharp-wave ripple activity in the hippocampus. The multitude of representations generated through interactions between a variety of functionally specialized cell types in the entorhinal-hippocampal circuit may be at the heart of the mechanism for declarative memory formation. Copyright © 2015 Cold Spring Harbor Laboratory Press; all rights reserved.
Article
Full-text available
23 experts from the many areas of systems neuroscience were invited to each write a chapter that formulates one problem. Together these 23 problems provide a challenging roadmap to the field of systems neuroscience and will serve as a source of inspiration for future brain explorers.
How have brains evolved?
1. "Shall we even understand the fly's brain?" Gilles Laurent. I hope to illustrate two main things: the first is that small systems, and particularly small olfactory systems, seem to use mechanisms and strategies that are not unique to them. The second is that small systems are not at all that "simple"; this reinforces my view that we may be better off starting with the modest goal of understanding flies first.
2. "Can we understand the action of brain in natural environments?" Hermann Wagner. We work mainly on reduced systems, but evolution has shaped brains in a different way. To really understand brain function we have to analyze it in the same environment in which brains evolved.
3. "Hemisphere dominance of brain function: which functions are lateralized and why?" Gunther Ehret. There are two main perspectives, a) an evolutionary one asking for common origins and advantages of hemisphere specializations of vertebrate, mainly mammalian, brains, and b) a proximate one asking for genetic and physiological mechanisms responsible for the realization of hemisphere specializations.
How is the cerebral cortex organized?
4. "What is the function of the thalamus?" S. Murray Sherman. The thalamus had long been thought to perform a boring, machine-like relay of information to cortex, but recent evidence suggests that it dynamically gates information flow and controls the nature of what cortex receives in a state-dependent manner. Furthermore, many areas of thalamus seem to perform a "higher-order" relay from one cortical area to another, and indeed this trans-thalamic route may be critical for much, perhaps all, cortico-cortical communication.
Article
Full-text available
The spiking neural network architecture (SpiNNaker) project aims to deliver a massively parallel million-core computer whose interconnect architecture is inspired by the connectivity characteristics of the mammalian brain, and which is suited to the modeling of large-scale spiking neural networks in biological real time. Specifically, the interconnect allows the transmission of a very large number of very small data packets, each conveying explicitly the source, and implicitly the time, of a single neural action potential or “spike.” In this paper, we review the current state of the project, which has already delivered systems with up to 2500 processors, and present the real-time event-driven programming model that supports flexible access to the resources of the machine and has enabled its use by a wide range of collaborators around the world.
Conference Paper
Full-text available
This demonstration is based on the wafer-scale neuromorphic system presented in the previous papers by Schemmel et al. (2010), Scholze et al. (2011), and Millner et al. (2010). The demonstration setup will allow visitors to monitor and partially manipulate the neural events at every level. They will get an insight into the complex interplay between packet-based and real-time communication necessary to combine continuous-time mixed-signal neural networks with a packet-based transport network. Several network experiments implemented on the setup will be accessible for user interaction.
Article
Full-text available
Restricted Boltzmann Machines (RBMs) and Deep Belief Networks have been demonstrated to perform efficiently in a variety of applications, such as dimensionality reduction, feature learning, and classification. Their implementation on neuromorphic hardware platforms emulating large-scale networks of spiking neurons can have significant advantages from the perspectives of scalability, power dissipation and real-time interfacing with the environment. However, the traditional RBM architecture and the commonly used training algorithm known as Contrastive Divergence (CD) are based on discrete updates and exact arithmetic which do not directly map onto a dynamical neural substrate. Here, we present an event-driven variation of CD to train an RBM constructed with Integrate & Fire (I&F) neurons, that is constrained by the limitations of existing and near-future neuromorphic hardware platforms. Our strategy is based on neural sampling, which allows us to synthesize a spiking neural network that samples from a target Boltzmann distribution. The recurrent activity of the network replaces the discrete steps of the CD algorithm, while Spike-Timing-Dependent Plasticity (STDP) carries out the weight updates in an online, asynchronous fashion. We demonstrate our approach by training an RBM composed of leaky I&F neurons with STDP synapses to learn a generative model of the MNIST hand-written digit dataset, and by testing it in recognition, generation and cue integration tasks. Our results contribute to a machine learning-driven approach for synthesizing networks of spiking neurons capable of carrying out practical, high-level functionality.
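For comparison with the event-driven variant described above, here is a minimal sketch of conventional CD-1 for a binary RBM (biases omitted for brevity; this is the standard discrete-update algorithm, not the spiking STDP version):

```python
import numpy as np

# Minimal sketch of conventional Contrastive Divergence (CD-1) for a binary RBM.
# Layer sizes and the learning rate are illustrative.
rng = np.random.default_rng(0)
N_VIS, N_HID, LR = 784, 128, 0.01
W = 0.01 * rng.standard_normal((N_VIS, N_HID))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0):
    """Return the CD-1 weight update for one binary visible vector v0."""
    ph0 = sigmoid(v0 @ W)                         # P(h = 1 | v0), positive phase
    h0 = (rng.random(N_HID) < ph0).astype(float)  # sample hidden units
    pv1 = sigmoid(h0 @ W.T)                       # reconstruction of the visible layer
    ph1 = sigmoid(pv1 @ W)                        # hidden probabilities for the reconstruction
    return LR * (np.outer(v0, ph0) - np.outer(pv1, ph1))

# Example usage on a random binary "image":
v = (rng.random(N_VIS) < 0.1).astype(float)
W += cd1_update(v)
```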
Article
Full-text available
In this paper, the recent progress of synaptic electronics is reviewed. The basics of biological synaptic plasticity and learning are described. The material properties and electrical switching characteristics of a variety of synaptic devices are discussed, with a focus on the use of synaptic devices for neuromorphic or brain-inspired computing. Performance metrics desirable for large-scale implementations of synaptic devices are illustrated. A review of recent work on targeted computing applications with synaptic devices is presented.
Article
Full-text available
Dendrites are the main recipients of synaptic inputs and are important sites that determine neurons' input-output functions. This review focuses on thin neocortical dendrites, which receive the vast majority of synaptic inputs in cortex but also have specialized electrogenic properties. We present a simplified working-model biophysical scheme of pyramidal neurons that attempts to capture the essence of their dendritic function, including the ability to behave under plausible conditions as dynamic computational subunits. We emphasize the electrogenic capabilities of NMDA receptors (NMDARs) because these transmitter-gated channels seem to provide the major nonlinear depolarizing drive in thin dendrites, even allowing full-blown NMDA spikes. We show how apparent discrepancies in experimental findings can be reconciled and discuss the current status of dendritic spikes in vivo; a dominant NMDAR contribution would indicate that the input-output relations of thin dendrites are dynamically set by network activity and cannot be fully predicted by purely reductionist approaches.
Article
Full-text available
Reference brains are indispensable tools in human brain mapping, enabling integration of multimodal data into an anatomically realistic standard space. Available reference brains, however, are restricted to the macroscopic scale and do not provide information on the functionally important microscopic dimension. We created an ultrahigh-resolution three-dimensional (3D) model of a human brain at nearly cellular resolution of 20 micrometers, based on the reconstruction of 7404 histological sections. “BigBrain” is a free, publicly available tool that provides considerable neuroanatomical insight into the human brain, thereby allowing the extraction of microscopic data for modeling and simulation. BigBrain enables testing of hypotheses on optimal path lengths between interconnected cortical regions or on spatial organization of genetic patterning, redefining the traditional neuroanatomy maps such as those of Brodmann and von Economo.
Article
Full-text available
With STDP, a neuron embedded in a neuronal network can determine which neighboring neurons are worth listening to by potentiating those inputs that predict its own spiking activity. However, the neuron in question pays less attention to those neighboring neurons that fail to do this. In other words, the neuron pays less attention to neighbors speaking gibberish. The net result is that our sample neuron can integrate inputs with predictive power and transform this into a meaningful predictive output, even though the meaning itself is not strictly known by the neuron. In STDP we thus have a very simple and elegant algorithm for appropriately hooking up neurons in the brain. Little wonder that there has been so much excitement surrounding the discovery of STDP.
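A minimal sketch of the pair-based STDP window this description alludes to (amplitudes and time constants are generic illustrative values):

```python
import numpy as np

# Minimal sketch of a pair-based STDP window: inputs that fire shortly before the
# postsynaptic spike are potentiated, inputs that fire after it are depressed.
A_PLUS, A_MINUS, TAU_PLUS, TAU_MINUS = 0.010, 0.012, 20.0, 20.0  # amplitudes (unitless), time constants (ms)

def stdp_dw(dt_ms):
    """Weight change for a spike-time difference dt = t_post - t_pre (in ms)."""
    if dt_ms > 0:                                   # pre before post: the input "predicted" the spike
        return A_PLUS * np.exp(-dt_ms / TAU_PLUS)   # potentiation
    return -A_MINUS * np.exp(dt_ms / TAU_MINUS)     # pre after post: depression

# Example: an input firing 10 ms before the postsynaptic spike is strengthened,
# one firing 10 ms after it is weakened.
print(stdp_dw(+10.0), stdp_dw(-10.0))
```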
Article
Full-text available
Dendritic spines arise as small protrusions from the dendritic shaft of various types of neuron and receive inputs from excitatory axons. Ever since dendritic spines were first described in the nineteenth century, questions about their function have spawned many hypotheses. In this review, we introduce understanding of the structural and biochemical properties of dendritic spines with emphasis on components studied with imaging methods. We then explore advances in in vivo imaging methods that are allowing spine activity to be studied in living tissue, from super-resolution techniques to calcium imaging. Finally, we review studies on spine structure and function in vivo. These new results shed light on the development, integration properties and plasticity of spines.
Article
Full-text available
Author Summary Experimental data from neuroscience have provided substantial knowledge about the intricate structure of cortical microcircuits, but their functional role, i.e. the computational calculus that they employ in order to interpret ambiguous stimuli, produce predictions, and derive movement plans, has remained largely unknown. Earlier assumptions that these circuits implement a logic-like calculus have run into problems, because logical inference has turned out to be inadequate for solving inference problems in the real world, which often exhibits substantial degrees of uncertainty. In this article we propose an alternative theoretical framework for examining the functional role of precisely structured motifs of cortical microcircuits and dendritic computations in complex neurons, based on probabilistic inference through sampling. We show that these structural details endow cortical columns and areas with the capability to represent complex knowledge about their environment in the form of higher order dependencies among salient variables. We show that it also enables them to use this knowledge for probabilistic inference that can deal with uncertainty in stored knowledge and current observations. We demonstrate in computer simulations that the precisely structured neuronal microcircuits enable networks of spiking neurons to solve, through their inherent stochastic dynamics, a variety of complex probabilistic inference tasks.
Article
Full-text available
The organization of computations in networks of spiking neurons in the brain is still largely unknown, in particular in view of the inherently stochastic features of their firing activity and the experimentally observed trial-to-trial variability of neural systems in the brain. In principle there exists a powerful computational framework for stochastic computations, probabilistic inference by sampling, which can explain a large number of macroscopic experimental data in neuroscience and cognitive science. But it has turned out to be surprisingly difficult to create a link between these abstract models for stochastic computations and more detailed models of the dynamics of networks of spiking neurons. Here we create such a link and show that under some conditions the stochastic firing activity of networks of spiking neurons can be interpreted as probabilistic inference via Markov chain Monte Carlo (MCMC) sampling. Since common methods for MCMC sampling in distributed systems, such as Gibbs sampling, are inconsistent with the dynamics of spiking neurons, we introduce a different approach based on non-reversible Markov chains that is able to reflect inherent temporal processes of spiking neuronal activity through a suitable choice of random variables. We propose a neural network model and show by a rigorous theoretical analysis that its neural activity implements MCMC sampling of a given distribution, both for the case of discrete and continuous time. This provides a step towards closing the gap between abstract functional models of cortical computation and more detailed models of networks of spiking neurons.
Article
Full-text available
Hardware implementations of spiking neurons can be extremely useful for a large variety of applications, ranging from high-speed modeling of large-scale neural systems to real-time behaving systems, to bidirectional brain-machine interfaces. The specific circuit solutions used to implement silicon neurons depend on the application requirements. In this paper we describe the most common building blocks and techniques used to implement these circuits, and present an overview of a wide range of neuromorphic silicon neurons, which implement different computational models, ranging from biophysically realistic and conductance-based Hodgkin-Huxley models to bi-dimensional generalized adaptive integrate and fire models. We compare the different design methodologies used for each silicon neuron design described, and demonstrate their features with experimental results, measured from a wide range of fabricated VLSI chips.
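As a point of reference, a simple software version of the leaky integrate-and-fire dynamics that many of the surveyed silicon neurons implement in analog or mixed-signal circuitry (parameter values are generic textbook choices, not taken from any particular chip):

```python
# Minimal software version of a leaky integrate-and-fire neuron, the simplest of
# the computational models discussed above.
DT, TAU_M, V_REST, V_TH, V_RESET, R_M = 0.1, 20.0, -70.0, -54.0, -70.0, 10.0  # ms, ms, mV, mV, mV, MOhm

def simulate_lif(i_inj_nA, t_total_ms=200.0):
    """Return spike times (ms) under a constant injected current (nA)."""
    v, spikes = V_REST, []
    for step in range(int(t_total_ms / DT)):
        v += (DT / TAU_M) * (-(v - V_REST) + R_M * i_inj_nA)  # leaky integration
        if v >= V_TH:                                         # threshold crossing
            spikes.append(step * DT)
            v = V_RESET                                       # reset after the spike
    return spikes

print(len(simulate_lif(2.0)), "spikes in 200 ms")
```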
Article
Full-text available
Neuronal activity is mediated through changes in the probability of stochastic transitions between open and closed states of ion channels. While differences in morphology define neuronal cell types and may underlie neurological disorders, very little is known about influences of stochastic ion channel gating in neurons with complex morphology. We introduce and validate new computational tools that enable efficient generation and simulation of models containing stochastic ion channels distributed across dendritic and axonal membranes. Comparison of five morphologically distinct neuronal cell types reveals that when all simulated neurons contain identical densities of stochastic ion channels, the amplitude of stochastic membrane potential fluctuations differs between cell types and depends on sub-cellular location. For typical neurons, the amplitude of membrane potential fluctuations depends on channel kinetics as well as open probability. Using a detailed model of a hippocampal CA1 pyramidal neuron, we show that when intrinsic ion channels gate stochastically, the probability of initiation of dendritic or somatic spikes by dendritic synaptic input varies continuously between zero and one, whereas when ion channels gate deterministically, the probability is either zero or one. At physiological firing rates, stochastic gating of dendritic ion channels almost completely accounts for probabilistic somatic and dendritic spikes generated by the fully stochastic model. These results suggest that the consequences of stochastic ion channel gating differ globally between neuronal cell-types and locally between neuronal compartments. Whereas dendritic neurons are often assumed to behave deterministically, our simulations suggest that a direct consequence of stochastic gating of intrinsic ion channels is that spike output may instead be a probabilistic function of patterns of synaptic input to dendrites.
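A minimal sketch of the kind of stochastic channel gating discussed above, for a population of two-state (closed/open) channels with fixed rates (rates and population size are illustrative; real channels have voltage-dependent, multi-state kinetics):

```python
import numpy as np

# Minimal sketch of stochastic gating in a population of two-state ion channels.
rng = np.random.default_rng(0)
DT, K_OPEN, K_CLOSE, N_CHANNELS = 0.01, 5.0, 20.0, 1000  # ms, 1/ms, 1/ms, channel count

def step_open_count(n_open):
    """Advance one time step and return the new number of open channels."""
    n_closed = N_CHANNELS - n_open
    opening = rng.binomial(n_closed, 1.0 - np.exp(-K_OPEN * DT))   # closed -> open transitions
    closing = rng.binomial(n_open, 1.0 - np.exp(-K_CLOSE * DT))    # open -> closed transitions
    return n_open + opening - closing

n_open, trace = 0, []
for _ in range(10_000):
    n_open = step_open_count(n_open)
    trace.append(n_open)
# The open count fluctuates around N_CHANNELS * K_OPEN / (K_OPEN + K_CLOSE) = 200;
# these fluctuations are the channel noise whose consequences the paper analyzes.
```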
Article
Full-text available
Ramón y Cajal's studies in the field of neuroscience provoked a radical change in the course of its history. For this reason he is considered the father of modern neuroscience. Some of his original preparations are housed at the Cajal Museum (Cajal Institute, CSIC, Madrid, Spain). In this article, we catalogue and analyse more than 4,500 of Cajal's histological preparations, the same preparations he used during his scientific career. Furthermore, we catalogued Cajal's original correspondence, both manuscripts and personal letters, drawings and plates. This is the first time anyone has compiled an account of Cajal's enormous scientific production, offering some curious insights into his work and his legacy.
Article
Full-text available
In the present overview, our wish is to demystify some aspects of coding with spike timing, through a simple review of well-understood technical facts regarding spike coding. Our goal is a better understanding of the extent to which computing and modeling with spiking neuron networks might be biologically plausible and computationally efficient. We intentionally restrict ourselves to a deterministic implementation of spiking neuron networks and we consider that the dynamics of a network is defined by a non-stochastic mapping. By staying in this rather simple framework, we are able to propose results, formulas, and concrete numerical values on several topics: (i) general time constraints, (ii) links between continuous signals and spike trains, (iii) spiking neuron network parameter adjustment. Besides an argued review of several facts and issues about neural coding by spikes, we propose new results, such as a numerical evaluation of the most critical temporal variables that schedule the progress of realistic spike trains. When implementing spiking neuron networks, for biological simulation or computational purposes, it is important to take into account the indisputable facts unfolded here. This precaution could prevent one from implementing mechanisms that would be meaningless relative to obvious time constraints, or from artificially introducing spikes when continuous calculations would be sufficient and simpler. It is also pointed out that implementing a large-scale spiking neuron network is finally a simple task.
Article
Decades of research on the neural code underlying spatial navigation have revealed a diverse set of neural response properties. The Entorhinal Cortex (EC) of the mammalian brain contains a rich set of spatial correlates, including grid cells which encode space using tessellating patterns. However, the mechanisms and functional significance of these spatial representations remain largely mysterious. As a new way to understand these neural representations, we trained recurrent neural networks (RNNs) to perform navigation tasks in 2D arenas based on velocity inputs. Surprisingly, we find that grid-like spatial response patterns emerge in trained networks, along with units that exhibit other spatial correlates, including border cells and band-like cells. All these different functional types of neurons have been observed experimentally. The order of the emergence of grid-like and border cells is also consistent with observations from developmental studies. Together, our results suggest that grid cells, border cells and others as observed in EC may be a natural solution for representing space efficiently given the predominant recurrent connections in the neural circuits.
Conference Paper
Multipliers are the most space- and power-hungry arithmetic operators in digital implementations of deep neural networks. We train a set of state-of-the-art neural networks (Maxout networks) on three benchmark datasets: MNIST, CIFAR-10 and SVHN. They are trained with three distinct formats: floating point, fixed point and dynamic fixed point. For each of those datasets and for each of those formats, we assess the impact of the precision of the multiplications on the final error after training. We find that very low precision is sufficient not just for running trained networks but also for training them. For example, it is possible to train Maxout networks with 10-bit multiplications.
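A minimal sketch of the fixed-point quantization underlying such low-precision multiplications (the bit widths here are illustrative and not the exact formats evaluated in the paper):

```python
import numpy as np

# Minimal sketch of fixed-point multiplication: operands are rounded to a signed
# fixed-point format with `frac` fractional bits before multiplying, and the
# product is re-quantized to the same format.
def to_fixed(x, bits=10, frac=7):
    """Quantize x to a signed fixed-point value with `bits` total bits."""
    scale = 2 ** frac
    lo, hi = -(2 ** (bits - 1)), 2 ** (bits - 1) - 1
    return np.clip(np.round(x * scale), lo, hi) / scale

def fixed_mul(a, b, bits=10, frac=7):
    """Multiply two values in fixed point and re-quantize the result."""
    return to_fixed(to_fixed(a, bits, frac) * to_fixed(b, bits, frac), bits, frac)

print(fixed_mul(0.37, -1.42), 0.37 * -1.42)  # low-precision vs. exact product
```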
Article
“Computing performance doubles every couple of years” is the popular re-phrasing of Moore’s Law, which describes the 500,000-fold increase in the number of transistors on modern computer chips. But what impact has this 50-year expansion of the technological frontier of computing had on the productivity of firms? This paper focuses on the surprise change in chip design in the mid-2000s, when Moore’s Law faltered. No longer could it provide ever-faster processors, but instead it provided multicore ones with stagnant speeds. Using the asymmetric impacts from the changeover to multicore, this paper shows that firms that were ill-suited to this change because of their software usage were much less advantaged by later improvements from Moore’s Law. Each standard deviation in this mismatch between firm software and multicore chips cost them 0.5-0.7 pp in yearly total factor productivity growth. These losses are permanent, and without adaptation would reflect a lower long-term growth rate for these firms. These findings may help explain larger observed declines in the productivity growth of users of information technology.
Article
Since the proposal of a fast learning algorithm for deep belief networks in 2006, deep learning techniques have drawn ever-increasing research interest because of their inherent capability of overcoming the drawbacks of traditional algorithms dependent on hand-designed features. Deep learning approaches have also been found to be suitable for big data analysis with successful applications to computer vision, pattern recognition, speech recognition, natural language processing, and recommendation systems. In this paper, we discuss some widely used deep learning architectures and their practical applications. An up-to-date overview is provided on four deep learning architectures, namely, the autoencoder, convolutional neural network, deep belief network, and restricted Boltzmann machine. Different types of deep neural networks are surveyed and recent progress is summarized. Applications of deep learning techniques in selected areas (speech recognition, pattern recognition and computer vision) are highlighted. A list of future research topics is finally given with clear justifications.
Article
We report first observations of an integrated analog photonic network, in which connections are configured by microring weight banks, as well as the first use of electro-optic modulators as photonic neurons. A mathematical isomorphism between the silicon photonic circuit and a continuous neural model is demonstrated through dynamical bifurcation analysis. Exploiting this isomorphism, existing neural engineering tools can be adapted to silicon photonic information processing systems. A 49-node silicon photonic neural network programmed using a "neural compiler" is simulated and predicted to outperform a conventional approach 1,960-fold in a toy differential system emulation task. Photonic neural networks leveraging silicon photonic platforms could access new regimes of ultrafast information processing for radio, control, and scientific computing.
Article
Biological information-processing systems operate on completely different principles from those with which most engineers are familiar. For many problems, particularly those in which the input data are ill-conditioned and the computation can be specified in a relative manner, biological solutions are many orders of magnitude more effective than those we have been able to implement using digital methods. This advantage can be attributed principally to the use of elementary physical phenomena as computational primitives, and to the representation of information by the relative values of analog signals, rather than by the absolute values of digital signals. This approach requires adaptive techniques to mitigate the effects of component differences. This kind of adaptation leads naturally to systems that learn about their environment. Large-scale adaptive analog systems are more robust to component degradation and failure than are more conventional systems, and they use far less power. For this reason, adaptive analog technology can be expected to utilize the full potential of wafer-scale silicon fabrication.
Article
The digital reconstruction of a slice of rat somatosensory cortex from the Blue Brain Project provides the most complete simulation of a piece of excitable brain matter to date. To place these efforts in context and highlight their strengths and limitations, we introduce a Biological Imitation Game, based on Alan Turing's Imitation Game, that operationalizes the difference between real and simulated brains.
Article
The new era of cognitive computing brings forth the grand challenge of developing systems capable of processing massive amounts of noisy multisensory data. This type of intelligent computing poses a set of constraints, including real-time operation, low-power consumption and scalability, which require a radical departure from conventional system design. Brain-inspired architectures offer tremendous promise in this area. To this end, we developed TrueNorth, a 65 mW real-time neurosynaptic processor that implements a non-von Neumann, low-power, highly-parallel, scalable, and defect-tolerant architecture. With 4096 neurosynaptic cores, the TrueNorth chip contains 1 million digital neurons and 256 million synapses tightly interconnected by an event-driven routing infrastructure. The fully digital 5.4 billion transistor implementation leverages existing CMOS scaling trends, while ensuring one-to-one correspondence between hardware and software. With such aggressive design metrics and the TrueNorth architecture breaking path with prevailing architectures, it is clear that conventional computer-aided design (CAD) tools could not be used for the design. As a result, we developed a novel design methodology that includes mixed asynchronous–synchronous circuits and a complete tool flow for building an event-driven, low-power neurosynaptic chip. The TrueNorth chip is fully configurable in terms of connectivity and neural parameters to allow custom configurations for a wide range of cognitive and sensory perception applications. To reduce the system’s communication energy, we have adapted existing application-agnostic very large-scale integration CAD placement tools for mapping logical neural networks to the physical neurosynaptic core locations on the TrueNorth chips. With that, we have successfully demonstrated the use of TrueNorth-based systems in multiple applications, including visual object recognition, with higher performance and orders of magnitude lower power consumption than the same algorithms run on von Neumann architectures. The TrueNorth chip and its tool flow serve as building blocks for future cognitive systems, and give designers an opportunity to develop novel brain-inspired architectures and systems based on the knowledge obtained from this paper.
Article
The recent development of power-efficient neuromorphic hardware offers great opportunities for applications where power consumption is a main concern, ranging from mobile platforms to server farms. However, it remains a challenging task to design spiking neural networks (SNN) to do pattern recognition on such hardware. We present a SNN for digit recognition which relies on mechanisms commonly used on neuromorphic hardware, i.e. exponential synapses with spike-timing-dependent plasticity, lateral inhibition, and an adaptive threshold. Unlike most other approaches, we do not present any class labels to the network; the network uses unsupervised learning. The performance of our network scales well with the number of neurons used. Intuitively, the used algorithm is comparable to k-means and competitive learning algorithms such as vector quantization and self-organizing maps: each neuron learns a representation of a part of the input space, similar to a centroid in k-means. Our architecture achieves 95% accuracy on the MNIST benchmark, which outperforms other unsupervised learning methods for SNNs. The fact that we used no domain-specific knowledge points toward a more general applicability of the network design.
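A minimal sketch of the adaptive-threshold (homeostasis) mechanism mentioned above, in which each neuron's firing threshold is raised whenever it spikes and decays slowly back, preventing any single neuron from dominating (constants are illustrative, not the values used in the paper):

```python
import numpy as np

# Minimal sketch of adaptive thresholds: each excitatory neuron carries a
# threshold offset theta that grows when the neuron fires and decays slowly,
# so the population spreads out over the input space.
DT, TAU_THETA, THETA_PLUS, V_TH_BASE = 0.5, 1.0e5, 0.05, -52.0  # ms, ms, mV, mV

def update_threshold(theta, spiked):
    """theta: per-neuron offsets (array, mV); spiked: boolean array of who just fired."""
    theta = theta * np.exp(-DT / TAU_THETA)  # slow decay toward zero
    return theta + THETA_PLUS * spiked       # bump the neurons that spiked

def effective_threshold(theta):
    """Membrane potential a neuron must exceed to fire."""
    return V_TH_BASE + theta

theta = np.zeros(100)                                  # 100 excitatory neurons, equal at first
theta = update_threshold(theta, np.arange(100) == 3)   # neuron 3 fired; its threshold rises
```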
Article
For over a century, the neuron doctrine - which states that the neuron is the structural and functional unit of the nervous system - has provided a conceptual foundation for neuroscience. This viewpoint reflects its origins in a time when the use of single-neuron anatomical and physiological techniques was prominent. However, newer multineuronal recording methods have revealed that ensembles of neurons, rather than individual cells, can form physiological units and generate emergent functional properties and states. As a new paradigm for neuroscience, neural network models have the potential to incorporate knowledge acquired with single-neuron approaches to help us understand how emergent functional states generate behaviour, cognition and mental disease.
Article
This book aims to provide an overview of general deep learning methodology and its applications to a variety of signal and information processing tasks. The application areas are chosen with the following three criteria: 1) expertise or knowledge of the authors; 2) the application areas that have already been transformed by the successful use of deep learning technology, such as speech recognition and computer vision; and 3) the application areas that have the potential to be impacted significantly by deep learning and that have gained concentrated research efforts, including natural language and text processing, information retrieval, and multimodal information processing empowered by multi-task deep learning. In Chapter 1, we provide the background of deep learning, as intrinsically connected to the use of multiple layers of nonlinear transformations to derive features from the sensory signals such as speech and visual images. In the most recent literature, deep learning is embodied also as representation learning, which involves a hierarchy of features or concepts where higher-level representations of them are defined from lower-level ones and where the same lower-level representations help to define higher-level ones. In Chapter 2, a brief historical account of deep learning is presented. In particular, selected chronological development of speech recognition is used to illustrate the recent impact of deep learning that has become a dominant technology in the speech recognition industry within only a few years since the start of a collaboration between academic and industrial researchers in applying deep learning to speech recognition. In Chapter 3, a three-way classification scheme for a large body of work in deep learning is developed. We classify a growing number of deep learning techniques into unsupervised, supervised, and hybrid categories, and present qualitative descriptions and a literature survey for each category. From Chapter 4 to Chapter 6, we discuss in detail three popular deep networks and related learning methods, one in each category. Chapter 4 is devoted to deep autoencoders as a prominent example of the unsupervised deep learning techniques. Chapter 5 gives a major example in the hybrid deep network category, which is the discriminative feed-forward neural network for supervised learning with many layers initialized using layer-by-layer generative, unsupervised pre-training. In Chapter 6, deep stacking networks and several of the variants are discussed in detail, which exemplify the discriminative or supervised deep learning techniques in the three-way categorization scheme. In Chapters 7-11, we select a set of typical and successful applications of deep learning in diverse areas of signal and information processing and of applied artificial intelligence. In Chapter 7, we review the applications of deep learning to speech and audio processing, with emphasis on speech recognition organized according to several prominent themes. In Chapter 8, we present recent results of applying deep learning to language modeling and natural language processing. Chapter 9 is devoted to selected applications of deep learning to information retrieval including Web search. In Chapter 10, we cover selected applications of deep learning to image object recognition in computer vision. Selected applications of deep learning to multi-modal processing and multi-task learning are reviewed in Chapter 11.
Finally, an epilogue is given in Chapter 12 to summarize what we presented in earlier chapters and to discuss future challenges and directions.
Article
Some problems in neuroscience are nearly solved. For others, solutions are decades away. The current pace of advances in methods forces us to take stock, to ask where we are going, and what we should research next. Copyright © 2015 Elsevier Ltd. All rights reserved.
Article
This monograph provides an overview of general deep learning methodology and its applications to a variety of signal and information processing tasks. The application areas are chosen with the following three criteria in mind: (1) expertise or knowledge of the authors; (2) the application areas that have already been transformed by the successful use of deep learning technology, such as speech recognition and computer vision; and (3) the application areas that have the potential to be impacted significantly by deep learning and that have been experiencing research growth, including natural language and text processing, information retrieval, and multimodal information processing empowered by multi-task deep learning.
Article
We are used to viewing noise as a nuisance in computing systems. This is a pity, since noise will be abundantly available in energy-efficient future nanoscale devices and circuits. I propose here to learn from the way the brain deals with noise, and apparently even benefits from it. Recent theoretical results have provided insight into how this can be achieved: how noise enables networks of spiking neurons to carry out probabilistic inference through sampling and also enables creative problem solving. In addition, noise supports the self-organization of networks of spiking neurons, and learning from rewards. I will sketch here the main ideas and some consequences of these results. I will also describe why these results are paving the way for a qualitative jump in the computational capability and learning performance of neuromorphic networks of spiking neurons with noise, and for other future computing systems that are able to treat noise as a resource.
Article
The MOSFET scaling principles for obtaining simultaneous improvements in transistor density, switching speed, and power dissipation described by Robert H. Dennard and others in "Design of Ion-implanted MOSFETs with Very Small Physical Dimensions" (1974) became a roadmap for the semiconductor industry to provide systematic and predictable transistor improvements. New technology generations emerged approximately every three years during the 1970s and 1980s and have appeared every other year starting in the mid-1990s; they promise to continue, although we face growing challenges.
Article
With advances in exfoliation and synthetic techniques, atomically thin films of semiconducting transition metal dichalcogenides have recently been isolated and characterized. Their two-dimensional structure, coupled with a direct band gap in the visible portion of the electromagnetic spectrum, suggests suitability for digital electronics and optoelectronics. Towards that end, several classes of high-performance devices have been reported along with significant progress in understanding their physical properties. Here, we present a review of the architecture, operating principles, and physics of electronic and optoelectronic devices based on ultrathin transition metal dichalcogenide semiconductors. By critically assessing and comparing the performance of these devices with competing technologies, the merits and shortcomings of this emerging class of electronic materials are identified, thereby providing a roadmap for future development.
Article
Activity shapes the structure of neurons and their circuits. Two-photon imaging of CA1 neurons expressing enhanced green fluorescent protein in developing hippocampal slices from rat brains was used to characterize dendritic morphogenesis in response to synaptic activity. High-frequency focal synaptic stimulation induced a period (longer than 30 minutes) of enhanced growth of small filopodia-like protrusions (typically less than 5 micrometers long). Synaptically evoked growth was long-lasting and localized to dendritic regions close (less than 50 micrometers) to the stimulating electrode and was prevented by blockade of N-methyl-D-aspartate receptors. Thus, synaptic activation can produce rapid input-specific changes in dendritic structure. Such persistent structural changes could contribute to the development of neural circuitry.
Article
A fundamental feature of membranes is the lateral diffusion of lipids and proteins. Control of lateral diffusion provides a mechanism for regulating the structure and function of synapses. Single-particle tracking (SPT) has emerged as a powerful way to directly visualize these movements. SPT can reveal complex diffusive behaviors, which can be regulated by neuronal activity over time and space. Such is the case for neurotransmitter receptors, which are transiently stabilized at synapses by scaffolding molecules. This regulation provides new insight into mechanisms by which the dynamic equilibrium of receptor-scaffold assembly can be regulated. We will briefly review here recent data on this mechanism, which ultimately tunes the number of receptors at synapses and therefore synaptic strength.
Article
Human perception has recently been characterized as statistical inference based on noisy and ambiguous sensory inputs. Moreover, suitable neural representations of uncertainty have been identified that could underlie such probabilistic computations. In this review, we argue that learning an internal model of the sensory environment is another key aspect of the same statistical inference procedure and thus perception and learning need to be treated jointly. We review evidence for statistically optimal learning in humans and animals, and re-evaluate possible neural representations of uncertainty based on their potential to support statistically optimal learning. We propose that spontaneous activity can have a functional role in such representations leading to a new, sampling-based, framework of how the cortex represents information and uncertainty.
Article
Principles of Neural Science