Conference Paper

Evolving spiking networks with variable memristors

Abstract

This paper presents a spiking neuro-evolutionary system which implements memristors as neuromodulatory connections, i.e. whose weights can vary during a trial. The evolutionary design process exploits parameter self-adaptation and a constructionist approach, allowing the number of neurons, connection weights, and inter-neural connectivity pattern to be evolved for each network. Additionally, each memristor has its own conductance profile, which alters the neuromodulatory behaviour of the memristor and may be altered during the application of the GA. We demonstrate that this approach allows the evolutionary process to discover beneficial memristive behaviours at specific points in the networks. We evaluate our approach against two phenomenological real-world memristive implementations, a theoretical "linear memristor", and a system containing standard connections only. Performance is evaluated on a simulated robotic navigation task.
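To make the neuromodulatory role of the memristive connections concrete, here is a minimal sketch, not the authors' implementation: a synapse whose bounded conductance acts as the weight and drifts during the trial in response to pre- and post-synaptic spike activity. The class name, update rule and parameter values are illustrative assumptions.

class MemristiveSynapse:
    """Illustrative memristive connection: the weight is a bounded conductance
    that varies during a trial (a hypothetical sketch, not the paper's model)."""

    def __init__(self, w_init=0.5, w_min=0.0, w_max=1.0, drift=0.05):
        self.w = w_init                      # normalised conductance used as the weight
        self.w_min, self.w_max = w_min, w_max
        self.drift = drift                   # evolvable conductance-profile parameter

    def transmit(self, pre_spike: bool) -> float:
        """Current injected into the postsynaptic neuron for this time step."""
        return self.w if pre_spike else 0.0

    def modulate(self, pre_spike: bool, post_spike: bool) -> None:
        """Neuromodulation during the trial: correlated pre/post activity raises
        the conductance, a lone presynaptic spike lowers it, within hard bounds."""
        if pre_spike and post_spike:
            self.w = min(self.w_max, self.w + self.drift)
        elif pre_spike:
            self.w = max(self.w_min, self.w - self.drift)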

... Memristors are the fourth fundamental circuit element and have been the focus of intense research since the theory [1,2] was recently united with a practical demonstration [3]. Because they can hold a state, they may be useful as computer memory [4], and because synapses [2] can be described by memristor theory, it is anticipated that memristor devices could operate as artificial synapses in neuromorphic computing paradigms [5]. Several theoretical studies [6,7,8,9,10] suggest that this approach may be fruitful, and ongoing experimental projects [11,12] (including ours) aim to test it. ...
... Thus far, we [6,7] and others [8] have focused on utilising memristors in spiking networks, using a bottom up approach to building memristor computers. Such networks are often considered in terms of their use as control systems for autonomous or distributed systems and accordingly we focused on pathfinding by a single autonomous agent. ...
Conference Paper
Full-text available
Memristors are used to compare three gathering techniques in an already-mapped environment where resource locations are known. The All Site model, which apportions gatherers based on the modeled memristance of the path to each site, proves good at increasing overall efficiency and decreasing the time to fully deplete an environment; however, it only works well when the resources are of similar quality. The Leaf Cutter method, based on Leaf Cutter Ant behaviour, assigns all gatherers first to the best resource and, once it is depleted, uses the All Site model to spread them out amongst the rest. The Leaf Cutter model is better at increasing resource influx in the short term and vastly outperforms the All Site model in more varied environments. It is demonstrated that memristor-based abstractions of gatherer models provide potential methods for both the comparison and implementation of agent controls.
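As a rough illustration of how a memristive abstraction could drive gatherer allocation in the All Site style described above, the sketch below apportions gatherers in proportion to the conductance (inverse memristance) modelled for the path to each known site. The function name and weighting scheme are assumptions, not the paper's implementation.

def apportion_gatherers(path_memristance: dict, n_gatherers: int) -> dict:
    """Spread gatherers across known sites, favouring low-memristance paths."""
    conductance = {site: 1.0 / m for site, m in path_memristance.items()}
    total = sum(conductance.values())
    return {site: round(n_gatherers * g / total) for site, g in conductance.items()}

# e.g. apportion_gatherers({"A": 2.0, "B": 4.0}, 12) gives roughly {"A": 8, "B": 4}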
Chapter
This chapter puts forward the analysis of a novel memristor-based four-dimensional (4D) chaotic system without equilibria and its projective synchronization using a nonlinear active control technique. The proposed memristor-based 4D chaotic system has a total of nine terms, two of them nonlinear. Different qualitative and quantitative tools (time series, phase plane, Lyapunov exponents, bifurcation diagram, Lyapunov dimension, Poincaré map) are used to analyze the proposed memristor-based system. The proposed system has periodic, 2-torus quasi-periodic, chaotic, and chaotic 2-torus attractors, which are confirmed by calculation of the system's Lyapunov exponents and bifurcation diagram. The Poincaré map of the proposed chaotic system has thumb and parachute shapes, and the system exhibits unique and interesting behaviors. Such a memristor-based dissipative chaotic system is not available in the literature. Furthermore, projective synchronization between memristor-based novel chaotic systems is achieved. The nonlinear active control laws are designed using the sum of relevant variables of the systems, and the required global asymptotic stability condition is derived to achieve synchronization. Results are simulated in the MATLAB environment and show that the objectives are achieved successfully.
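For readers unfamiliar with projective synchronization via nonlinear active control, a generic template (not the chapter's specific nine-term system or its particular control laws) can be written as

\dot{\mathbf{x}} = f(\mathbf{x}), \qquad
\dot{\mathbf{y}} = g(\mathbf{y}) + \mathbf{u}(t), \qquad
\mathbf{e} = \mathbf{y} - \alpha\,\mathbf{x},

\mathbf{u}(t) = \alpha f(\mathbf{x}) - g(\mathbf{y}) - K\mathbf{e}
\;\;\Longrightarrow\;\; \dot{\mathbf{e}} = -K\mathbf{e},

so the error converges to zero and the slave state tracks \alpha times the master state for any positive-definite gain matrix K.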
Article
In 1948 Alan Turing presented a general representation scheme by which to achieve artificial intelligence – his unorganised machines. At the same time as suggesting that natural evolution may provide inspiration for search, he also noted that mechanisms inspired by the cultural aspects of learning may prove useful. This chapter presents results from an investigation into using Turing's dynamical network representation, designed by a new imitation-based, i.e. cultural, approach. Moreover, the original synchronous form and an asynchronous form of unorganised machines are considered, along with their implementation in memristive hardware.
Article
Full-text available
Some important aspects of memristor technology, including theory, device engineering, circuit modeling, digital and analog systems, and neuromorphic systems, are discussed. A new era in research on resistive switching started in the late 1990s, initiated independently by Asamitsu et al. using manganites, Kozicki et al. studying AgYGeSe systems, and Beck et al. investigating titanates and zirconates. A conceptual extension of digital memory is digital logic, and memristors have been proposed and demonstrated for different digital logic applications. An experimental demonstration of chaos, with analog components used to build the memristor, is presented. An image encryption application using piecewise linear memristors is also presented. Querlioz et al. showed that memristors used in a neuromorphic configuration for unsupervised learning can tolerate device variation.
Article
Full-text available
Implementation of a correlation-based learning rule, Spike-Timing-Dependent Plasticity (STDP), for asynchronous neuromorphic networks is demonstrated using 'memristive' nanodevices. STDP is performed using only locally available information at the specific moment in time, so mapping to crossbar-based CMOS-nano architectures, such as CMOS-MOLecular (CMOL), is done rather easily. The learning method is dynamic and online, in which the synaptic weights are modified based on neural activity. The performance of the proposed method is analyzed for specifically shaped spikes, and simulation results are provided for a synapse with STDP properties.
Conference Paper
Full-text available
This paper describes results from a specialised piece of visuo-robotic equipment which allows the artificial evolution of control systems for visually guided autonomous agents acting in the real world. Preliminary experiments with the equipment are described in which dynamical recurrent networks and visual sampling morphologies are concurrently evolved to allow agents to robustly perform simple visually guided tasks. Some of these control systems are shown to exhibit a surprising degree of adaptiveness when tested against generalised versions of the task for which they were evolved.
Conference Paper
Full-text available
This paper presents a Learning Classifier System (LCS) where each traditional rule is represented by a spiking neural network, a type of network with dynamic internal state. The evolutionary design process exploits parameter self-adaptation and a constructionist approach, providing the system with a flexible knowledge representation. It is shown how this approach allows for the evolution of networks of appropriate complexity to emerge whilst solving a continuous maze environment. Additionally, we extend the system to allow for temporal state decomposition. We evaluate our spiking neural LCS against one that uses Multi Layer Perceptron rules.
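Parameter self-adaptation of the kind mentioned here is commonly realised by letting each individual carry and mutate its own mutation rate; a minimal sketch under that assumption (not the paper's exact operators) is:

import math, random

def self_adaptive_mutate(genome, mu):
    """Log-normally perturb the individual's own mutation rate, then use the
    new rate to decide which genes to perturb (an illustrative sketch)."""
    mu_new = mu * math.exp(random.gauss(0.0, 1.0))
    mutated = [g + random.gauss(0.0, 0.1) if random.random() < mu_new else g
               for g in genome]
    return mutated, mu_new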
Chapter
Full-text available
The progressive design method allows for the evolution of complex controllers in complex robots. However, the process is not performed in a completely automatic way, as the evolutionary robotics approach aims for. Instead, a gradual shaping of the controller is performed, in which a human trainer directs the learning process by presenting increasingly complex learning tasks and deciding the best combination of modules over time, until the final complex goal is reached. This process of human shaping seems unavoidable to us if a complex robot body, sensors/actuators, environment and task are imposed beforehand. This point has also been suggested by other researchers (Urzelai et al., 1998; Muthuraman et al., 2003). But, unlike other approaches, progressive design, by implementing modularity at the level of devices and also at the level of learning, allows greater flexibility in the shaping of complex robots. The main reason is that, due to the modularization at the level of devices, the designer can select at any evolutionary stage which small group of sensors and actuators will participate, and under which task, and evolve only those modules. This would not be possible on a complex robot if modularization at the level of behavior were used. Progressive design can be seen as an implementation of the incremental evolution technique, but with better control over who is learning what at each stage of the evolutionary process. If incremental evolution were used on a controller with several inputs and outputs that controls every aspect of the robot, it could produce genetic linkage whereby learning some behaviors in early stages would prevent learning other behaviors in later steps, because the controller becomes so biased that it cannot recover. This effect may be especially important in complex robots where several motors have to be coordinated: the learning of one coordination task may prevent the learning of a different one. Instead, the use of progressive design allows for the evolution of only those parts required for the task for which they are required, allowing a more flexible design. In the case of the Khepera robot, the results obtained when the eleven modules are evolved in a single-stage process are compared with the results obtained in three stages, ...
Article
Full-text available
Cyberbotics Ltd. develops Webots™, a mobile robotics simulation software that provides you with a rapid prototyping environment for modelling, programming and simulating mobile robots. The provided robot libraries enable you to transfer your control programs to several commercially available real mobile robots. Webots™ lets you define and modify a complete mobile robotics setup, even several different robots sharing the same environment. For each object, you can define a number of properties, such as shape, color, texture, mass, friction, etc. You can equip each robot with a large number of available sensors and actuators. You can program these robots using your favorite development environment, simulate them and optionally transfer the resulting programs onto your real robots. Webots™ has been developed in collaboration with the Swiss Federal Institute of Technology in Lausanne, thoroughly tested, well documented and continuously maintained for over 7 years. It is now the main commercial product available from Cyberbotics Ltd.
Conference Paper
Full-text available
The search for new nonvolatile universal memories is propelled by the need to push power-efficient nanocomputing to the next higher level. As a potential contender for the next-generation memory technology of choice, the recently found "missing fourth circuit element", the memristor, has drawn a great deal of research interest. In this paper, we characterize the fundamental electrical properties of memristor devices by encapsulating them into a set of compact closed-form expressions. Our derivations provide valuable design insights and allow a deeper understanding of key design implications of memristor-based memories. In particular, we investigate the design of read and write circuits and analyze data integrity and noise-tolerance issues.
Conference Paper
Full-text available
This report presents a survey of computer-based simulators for unmanned vehicles. The simulators examined cover a wide spectrum of vehicles, including unmanned aerial vehicles, both full scale and micro size; unmanned surface and subsurface vehicles; and unmanned ground vehicles. The majority of simulators use simple numerical simulation and simplistic visualization using custom OpenGL code. An emerging trend is to use modified commercial game engines for physical simulation and visualization. The game engines that are commercially available today are capable of physical simulations providing basic physical properties and interactions between objects. Newer and/or specialized engines, such as the flight simulator X-Plane or the Ageia PhysX and Havok physics engines, are capable of simulating more complex physical interactions between objects. Researchers in need of a simulator have a choice of using game engines or available open source and commercially available simulators, allowing resources to be focused on research instead of building a new simulator. We conclude that it is no longer necessary to build a new simulator from scratch.
Conference Paper
Full-text available
Several gradient-based methods have been developed for Artificial Neural Network (ANN) training. Still, in some situations, such procedures may lead to local minima, making Evolutionary Algorithms (EAs) a promising alternative. In this work, EAs using direct representations are applied to several classification and regression ANN learning tasks. Furthermore, EAs are also combined with local optimization, under the Lamarckian framework. Both strategies are compared with conventional training methods. The results reveal an enhanced performance by a macro-mutation based Lamarckian approach.
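The Lamarckian combination described above can be summarised in a short sketch (function names are placeholders for whichever EA operators and local optimiser are used): each genome is locally refined, and the refined weights are written back before selection.

def lamarckian_generation(population, evaluate, local_search, select, vary):
    """One generation of a Lamarckian EA: refine, write back, select, vary."""
    refined = []
    for genome in population:
        improved = local_search(genome)          # e.g. a few local optimisation steps
        refined.append((improved, evaluate(improved)))
    parents = select(refined)                    # fitness-based selection on refined genomes
    return vary(parents)                         # crossover and (macro-)mutation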
Article
Full-text available
Artificial neural networks are applied to many real-world problems, ranging from pattern classification to robot control. In order to design a neural network for a particular task, the choice of an architecture (including the choice of a neuron model), and the choice of a learning algorithm have to be addressed. Evolutionary search methods can provide an automatic solution to these problems. New insights in both neuroscience and evolutionary biology have led to the development of increasingly powerful neuroevolution techniques over the last decade. This paper gives an overview of the most prominent methods for evolving artificial neural networks with a special focus on recent advances in the synthesis of learning architectures.
Article
Full-text available
Interdisciplinary research broadens the view of particular problems, yielding fresh and possibly unexpected insights. This is the case for neuromorphic engineering, where technology and neuroscience cross-fertilize each other. For example, consider on one side the recently discovered memristor, postulated in 1971 and realised thanks to research in nanotechnology electronics. On the other side, consider the mechanism known as Spike-Time-Dependent Plasticity (STDP), which describes a neuronal synaptic learning mechanism that outperforms the traditional Hebbian synaptic plasticity proposed in 1949. STDP was originally postulated as a computer learning algorithm, and is being used by the machine intelligence and computational neuroscience community. At the same time its biological and physiological foundations have been reasonably well established during the past decade. If memristance and STDP can be related, then (a) recent discoveries in nanophysics and nanoelectronic principles may shed new light on the intricate molecular and physiological mechanisms behind STDP in neuroscience, and (b) new neuromorphic-like computers built out of nanotechnology memristive devices could incorporate the biological STDP mechanisms, yielding a new generation of self-adaptive, ultra-high-density intelligent machines. Here we show that by combining memristance models with the electrical wave signals of neural impulses (spikes) converging from pre- and post-synaptic neurons onto a synaptic junction, STDP behavior emerges naturally. This result serves to explain how neural and memristance parameters modulate STDP, which might bring new insights to neurophysiologists searching for the ultimate physiological mechanisms responsible for STDP in biological synapses. At the same time, this result also provides a direct means to incorporate STDP learning mechanisms into a new generation of nanotechnology computers employing memristors.
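The mechanism can be illustrated with a small numerical sketch, under assumed spike-waveform shapes and thresholds rather than the paper's exact model: the voltage across a memristive synapse is taken as the difference of the post- and pre-synaptic spike waveforms, and conductance changes only where that voltage exceeds a threshold, which yields an STDP-like dependence on the relative spike timing.

import numpy as np

def spike_waveform(t, t_spike, amp=1.0, tail=-0.3, width=1.0, tail_len=5.0):
    """Crude spike shape: a sharp positive peak followed by a long shallow tail."""
    s = t - t_spike
    v = np.where((s >= 0) & (s < width), amp, 0.0)
    v += np.where((s >= width) & (s < width + tail_len), tail, 0.0)
    return v

def net_weight_change(dt, v_th=1.1, k=0.05):
    """Conductance change for a post spike occurring dt after the pre spike."""
    t = np.linspace(-20.0, 20.0, 4001)
    v = spike_waveform(t, dt) - spike_waveform(t, 0.0)        # post minus pre voltage
    over = np.sign(v) * np.clip(np.abs(v) - v_th, 0.0, None)  # only above-threshold drive
    return k * float(np.sum(over) * (t[1] - t[0]))            # integrate over time

# Sweeping dt from negative to positive values traces out an STDP-like curve:
# potentiation when the pre spike precedes the post spike, depression otherwise.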
Article
Full-text available
How do minds emerge from developing brains? According to "neural constructivism," the representational features of cortex are built from the dynamic interaction between neural growth mechanisms and environmentally derived neural activity. Contrary to popular selectionist models that emphasize regressive mechanisms, the neurobiological evidence suggests that this growth is a progressive increase in the representational properties of cortex. The interaction between the environment and neural growth results in a flexible type of learning: "constructive learning" minimizes the need for prespecification in accordance with recent neurobiological evidence that the developing cerebral cortex is largely free of domain-specific structure. Instead, the representational properties of cortex are built by the nature of the problem domain confronting it. This uniquely powerful and general learning strategy undermines the central assumption of classical learnability theory, that the learning properties of a system can be deduced from a fixed computational architecture. Neural constructivism suggests that the evolutionary emergence of neocortex in mammals is a progression toward more flexible representational structures, in contrast to the popular view of cortical evolution as an increase in innate, specialized circuits. Human cortical postnatal development is also more extensive and protracted than generally supposed, suggesting that cortex has evolved so as to maximize the capacity of environmental structure to shape its structure and function through constructive learning.
Article
Full-text available
Neurons are often considered to be the computational engines of the brain, with synapses acting solely as conveyers of information. But the diverse types of synaptic plasticity and the range of timescales over which they operate suggest that synapses have a more active role in information processing. Long-term changes in the transmission properties of synapses provide a physiological substrate for learning and memory, whereas short-term changes support a variety of computations. By expressing several forms of synaptic plasticity, a single neuron can convey an array of different signals to the neural circuit in which it operates.
Article
Full-text available
This article investigates the evolution of autonomous agents that perform a memory-dependent counting task. Two types of neurocontrollers are evolved: networks of McCulloch-Pitts neurons, and spiking integrate-and-fire networks. The results demonstrate the superiority of the spiky model in evolutionary success and network simplicity. The combination of spiking dynamics with incremental evolution leads to the successful evolution of agents counting over very long periods. Analysis of the evolved networks unravels the counting mechanism and demonstrates how the spiking dynamics are utilized. Using new measures of spikiness we find that even in agents with spiking dynamics, these are usually truly utilized only when they are really needed, that is, in the evolved subnetwork responsible for counting.
Conference Paper
Full-text available
Artificial Neural Networks for online learning problems are often implemented with synaptic plasticity to achieve adaptive behaviour. A common problem is that the overall learning dynamics are emergent properties strongly dependent on the correct combination of neural architectures, plasticity rules and environmental features. It is not clear what complexity in architectures and learning rules is required to match specific control and learning problems. Here a set of homosynaptic plasticity rules is applied to topologically unconstrained neural controllers while they operate and evolve in dynamic reward-based scenarios. Performance is monitored on simulations of bee foraging problems and T-maze navigation. Varying reward locations compel the neural controllers to adapt their foraging strategies over time, fostering online reward-based learning. In contrast to previous studies, the results here indicate that reward-based learning in complex dynamic scenarios can be achieved with basic plasticity rules and minimal topologies.
Article
Full-text available
The synapsing variable-length crossover (SVLC) algorithm provides a biologically inspired method for performing meaningful crossover between variable-length genomes. In addition to providing a rationale for variable-length crossover, it also provides a genotypic similarity metric for variable-length genomes, enabling standard niche formation techniques to be used with variable-length genomes. Unlike other variable-length crossover techniques, which consider genomes to be rigid inflexible arrays and in which some or all of the crossover points are randomly selected, the SVLC algorithm considers genomes to be flexible and chooses nonrandom crossover points based on the common parental sequence similarity. The SVLC algorithm recurrently "glues" or synapses homogeneous genetic subsequences together. This is done in such a way that common parental sequences are automatically preserved in the offspring, with only the genetic differences being exchanged or removed, independent of the length of such differences. In a variable-length test problem, the SVLC algorithm compares favorably with current variable-length crossover techniques. The variable-length approach is further advocated by demonstrating how a variable-length genetic algorithm (GA) can obtain a high-fitness solution in fewer iterations than a traditional fixed-length GA in a two-dimensional vector approximation task.
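The following sketch captures the flavour of this kind of variable-length crossover, though it is not the published SVLC algorithm: matching subsequences shared by both parents are preserved in both offspring, and only the differing gaps between them are exchanged. The use of difflib here is purely illustrative.

from difflib import SequenceMatcher

def variable_length_crossover(p1: str, p2: str) -> tuple:
    """Preserve common subsequences, exchange only the differing gaps."""
    blocks = SequenceMatcher(None, p1, p2).get_matching_blocks()
    c1, c2, i1, i2 = [], [], 0, 0
    for a, b, size in blocks:
        # exchange the non-matching gaps preceding this common block
        c1.append(p2[i2:b]); c2.append(p1[i1:a])
        # preserve the common block in both offspring
        c1.append(p1[a:a + size]); c2.append(p2[b:b + size])
        i1, i2 = a + size, b + size
    return "".join(c1), "".join(c2)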
Article
Full-text available
We report the fabrication and properties of a polymeric memristor, i.e. an electronic element with memory of its previous history. We show how this element can be viewed as a functional analog of a synaptic junction and how it can be used as a critical node in adaptive networks capable of bioinspired intelligent signal processing.
Article
A new family of crystalline oxides is identified that provides a method of producing complementary memristance (both n- and p-type having been demonstrated) with unusually large demonstrated memristive behavior. To the best of our knowledge, these are the only devices having a large enough memristance to be measurable at the macroscopic scale (device sizes of tens to hundreds of µm). Additionally, the oxides are highly conducting (low loss), with resistivities for both n- and p-type variants in the ~5E-4 ohm-cm range. Complementary oxide memristors (both n-type and p-type) have been demonstrated in the same material, in contrast to all other known memristor technologies, which are unipolar. Such behavior could be useful in future neuromorphic computing, since n-type material exhibits an inhibitory synaptic response (increasing resistance with time/voltage) while p-type material exhibits an excitatory synaptic response (decreasing resistance with time/voltage). In principle (not yet demonstrated), this core complementary technology can fully implement neuron/synapse brain function without the need for traditional CMOS.
Article
Diverse, complex, and adaptive animal behaviors are achieved by organizing hierarchically structured controllers in motor systems. The levels of control progress from simple spinal reflexes and central pattern generators through to executive cognitive control in the frontal cortex. Various types of hierarchical control structures have been introduced and shown to be effective in past artificial agent models, but few studies have shown how such structures can self-organize. This study describes how such hierarchical control may evolve in a simple recurrent neural network model implemented in a mobile robot. Topological constraints on information flow are found to improve system performance by decreasing interference between different parts of the network. One part becomes responsible for generating lower behavior primitives while another part evolves top-down sequencing of the primitives for achieving global goals. Fast and slow neuronal response dynamics are automatically generated in specific neurons of the lower and the higher levels, respectively. A hierarchical neural network is shown to outperform a comparable single-level network in controlling a mobile robot.
Article
An architecture for nano-electronic computation based on crossbars of hysteretic resistors is presented. We show how such crossbars can implement inverting and non-inverting latches and sum-of-product logic functions, and give examples of a NAND gate, exclusive-OR gate, and half adder. Multiple hysteretic resistor crossbars may be combined to implement complex computational systems. The designs have been evaluated using SPICE (a general-purpose circuit simulation program), demonstrating the feasibility of implementation given a suitable nano-electronic substrate.
Article
The computational power of formal models for networks of spiking neurons is compared with that of other neural network models based on McCulloch-Pitts neurons (i.e., threshold gates) and sigmoidal gates, respectively. In particular, it is shown that networks of spiking neurons are, with regard to the number of neurons that are needed, computationally more powerful than these other neural network models. A concrete biologically relevant function is exhibited which can be computed by a single spiking neuron (for biologically reasonable values of its parameters), but which requires hundreds of hidden units on a sigmoidal neural net. On the other hand, it is known that any function that can be computed by a small sigmoidal neural net can also be computed by a small network of spiking neurons. This article does not assume prior knowledge about spiking neurons, and it contains an extensive list of references to the currently available literature on computations in networks of spiking neurons and relevant results from neurobiology.
Book
Neurons in the brain communicate by short electrical pulses, the so-called action potentials or spikes. How can we understand the process of spike generation? How can we understand information transmission by neurons? What happens if thousands of neurons are coupled together in a seemingly random network? How does the network connectivity determine the activity patterns? And, vice versa, how does the spike activity influence the connectivity pattern? These questions are addressed in this 2002 introduction to spiking neurons aimed at those taking courses in computational neuroscience, theoretical biology, biophysics, or neural networks. The approach will suit students of physics, mathematics, or computer science; it will also be useful for biologists who are interested in mathematical modelling. The text is enhanced by many worked examples and illustrations. There are no mathematical prerequisites beyond what the audience would meet as undergraduates: more advanced techniques are introduced in an elementary, concrete fashion when needed.
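As a point of reference, a minimal leaky integrate-and-fire neuron of the kind introduced in such texts can be written as below; the parameter values are arbitrary illustrations, not taken from the book.

class LIFNeuron:
    """Leaky integrate-and-fire neuron with a hard threshold and reset."""

    def __init__(self, tau=20.0, v_rest=-65.0, v_thresh=-50.0, v_reset=-70.0):
        self.tau, self.v_rest = tau, v_rest
        self.v_thresh, self.v_reset = v_thresh, v_reset
        self.v = v_rest

    def step(self, input_current: float, dt: float = 1.0) -> bool:
        """Integrate the membrane potential for one time step; return True on a spike."""
        dv = (-(self.v - self.v_rest) + input_current) / self.tau
        self.v += dv * dt
        if self.v >= self.v_thresh:
            self.v = self.v_reset
            return True
        return False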
Article
A memristor is a two-terminal electronic device whose conductance can be precisely modulated by charge or flux through it. Here we experimentally demonstrate a nanoscale silicon-based memristor device and show that a hybrid system composed of complementary metal-oxide semiconductor neurons and memristor synapses can support important synaptic functions such as spike timing dependent plasticity. Using memristors as synapses in neuromorphic circuits can potentially offer both high connectivity and high density required for efficient computing.
Article
This paper is concerned with adaptation capabilities of evolved neural controllers. We propose to evolve mechanisms for parameter self-organization instead of evolving the parameters themselves. The method consists of encoding a set of local adaptation rules that synapses follow while the robot freely moves in the environment. In the experiments presented here, the performance of the robot is measured in environments that are different in significant ways from those used during evolution. The results show that evolutionary adaptive controllers solve the task much faster and better than evolutionary standard fixed-weight controllers, that the method scales up well to large architectures, and that evolutionary adaptive controllers can adapt to environmental changes that involve new sensory characteristics (including transfer from simulation to reality and across different robotic platforms) and new spatial relationships.
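A hedged sketch of the idea, with a simplified rule set rather than the paper's exact specification: the genome assigns each synapse a local adaptation rule and learning rate, and the chosen rule updates the weight online while the robot behaves.

def apply_local_rule(rule, w, pre, post, eta):
    """Update one synapse with its genetically assigned rule (illustrative only)."""
    if rule == "hebb":
        dw = pre * post
    elif rule == "presynaptic":
        dw = pre * (post - w)
    elif rule == "postsynaptic":
        dw = post * (pre - w)
    else:  # a covariance-like rule
        dw = (pre - 0.5) * (post - 0.5)
    return min(1.0, max(0.0, w + eta * dw))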
Article
An important question in neuroevolution is how to gain an advantage from evolving neural network topologies along with weights. We present a method, NeuroEvolution of Augmenting Topologies (NEAT), which outperforms the best fixed-topology method on a challenging benchmark reinforcement learning task. We claim that the increased efficiency is due to (1) employing a principled method of crossover of different topologies, (2) protecting structural innovation using speciation, and (3) incrementally growing from minimal structure. We test this claim through a series of ablation studies that demonstrate that each component is necessary to the system as a whole and to each other. What results is significantly faster learning. NEAT is also an important contribution to GAs because it shows how it is possible for evolution to both optimize and complexify solutions simultaneously, offering the possibility of evolving increasingly complex solutions over generations, and strengthening the analogy with biological evolution.
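The historical-marking idea behind NEAT's crossover can be sketched as follows; this is a simplification that ignores speciation, gene disabling and the equal-fitness case.

import random

def neat_like_crossover(parent_a, parent_b, a_is_fitter):
    """Parents map innovation number -> connection gene (e.g. a weight).
    Matching genes are inherited from either parent at random; disjoint and
    excess genes are taken from the fitter parent."""
    fitter, other = (parent_a, parent_b) if a_is_fitter else (parent_b, parent_a)
    child = {}
    for innovation, gene in fitter.items():
        if innovation in other and random.random() < 0.5:
            child[innovation] = other[innovation]
        else:
            child[innovation] = gene
    return child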
Article
In this paper a phenomenological model of spike-timing dependent synaptic plasticity (STDP) is developed that is based on a Volterra series-like expansion. Synaptic weight changes as a function of the relative timing of pre- and postsynaptic spikes are described by integral kernels that can easily be inferred from experimental data. The resulting weight dynamics can be stated in terms of statistical properties of pre- and postsynaptic spike trains. Generalizations to neurons that fire two different types of action potentials, such as cerebellar Purkinje cells where synaptic plasticity depends on correlations in two distinct presynaptic fibers, are discussed. We show that synaptic plasticity, together with strictly local bounds for the weights, can result in synaptic competition that is required for any form of pattern formation. This is illustrated by a concrete example where a single neuron equipped with STDP can selectively strengthen those synapses with presynaptic neurons that reliably deliver precisely timed spikes at the expense of other synapses which transmit spikes with a broad temporal distribution. Such a mechanism may be of vital importance for any neuronal system where information is coded in the timing of individual action potentials.
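One common way to write such a kernel expansion (not necessarily the paper's exact notation) is, to second order,

\frac{dw_{ij}}{dt} = a_0
  + a_1^{\mathrm{pre}}\, S_j(t) + a_1^{\mathrm{post}}\, S_i(t)
  + S_i(t)\int_0^{\infty} W_+(s)\, S_j(t-s)\, ds
  + S_j(t)\int_0^{\infty} W_-(s)\, S_i(t-s)\, ds,

where S_j and S_i are the pre- and postsynaptic spike trains (sums of Dirac delta functions) and the kernels W_± encode the timing dependence inferred from experimental data.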
Article
Anyone who ever took an electronics laboratory class will be familiar with the fundamental passive circuit elements: the resistor, the capacitor and the inductor. However, in 1971 Leon Chua reasoned from symmetry arguments that there should be a fourth fundamental element, which he called a memristor (short for memory resistor). Although he showed that such an element has many interesting and valuable circuit properties, until now no one has presented either a useful physical model or an example of a memristor. Here we show, using a simple analytical example, that memristance arises naturally in nanoscale systems in which solid-state electronic and ionic transport are coupled under an external bias voltage. These results serve as the foundation for understanding a wide range of hysteretic current-voltage behaviour observed in many nanoscale electronic devices that involve the motion of charged atomic or molecular species, in particular certain titanium dioxide cross-point switches.
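In the linear ionic drift picture reported in that work, a device of thickness D with doped-region width w(t) obeys

v(t) = \left( R_{\mathrm{ON}}\,\frac{w(t)}{D} + R_{\mathrm{OFF}}\left(1 - \frac{w(t)}{D}\right) \right) i(t),
\qquad
\frac{dw(t)}{dt} = \mu_V\,\frac{R_{\mathrm{ON}}}{D}\, i(t),

which, for R_ON much smaller than R_OFF, gives a charge-dependent memristance

M(q) = R_{\mathrm{OFF}}\left( 1 - \frac{\mu_V R_{\mathrm{ON}}}{D^{2}}\, q(t) \right).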
Conference Paper
The neuromorphic paradigm is attractive for nanoscale computation because of its massive parallelism, potential scalability, and inherent defect-, fault-, and failure-tolerance. We show how to implement timing-based learning laws, such as spike-timing-dependent plasticity (STDP), in simple, memristive nanodevices, such as those constructed from certain metal oxides. Such nano-scale "synapses" can be combined with CMOS "neurons" to create neuromorphic hardware several orders of magnitude denser than is possible in conventional CMOS. The key ideas are: (1) to factor out two synaptic state variables to pre- and post-synaptic neurons; and (2) to separate computational communication from learning by time-division multiplexing of pulse-width-modulated signals through synapses. This approach offers the advantages of: better control over power dissipation; fewer constraints on the design of memristive materials used for nanoscale synapses; the ability to turn learning dynamics on or off dynamically (e.g. by attentional priming mechanisms communicated extra-synaptically); greater control over the precise form and timing of the STDP equations; the ability to implement a variety of other learning laws besides STDP; and better circuit diversity, since the approach allows different learning laws to be implemented in different areas of a single chip using the same memristive material for all synapses.
Article
It is shown that for many problems, particularly those in which the input data are ill-conditioned and the computation can be specified in a relative manner, biological solutions are many orders of magnitude more effective than those using digital methods. This advantage can be attributed principally to the use of elementary physical phenomena as computational primitives, and to the representation of information by the relative values of analog signals rather than by the absolute values of digital signals. This approach requires adaptive techniques to mitigate the effects of component differences. This kind of adaptation leads naturally to systems that learn about their environment. Large-scale adaptive analog systems are more robust to component degradation and failure than are more conventional systems, and they use far less power. For this reason, adaptive analog technology can be expected to utilize the full potential of wafer-scale silicon fabrication.
Article
In most neural network models, synapses are treated as static weights that change only with the slow time scales of learning. It is well known, however, that synapses are highly dynamic and show use-dependent plasticity over a wide range of time scales. Moreover, synaptic transmission is an inherently stochastic process: a spike arriving at a presynaptic terminal triggers the release of a vesicle of neurotransmitter from a release site with a probability that can be much less than one. We consider a simple model for dynamic stochastic synapses that can easily be integrated into common models for networks of integrate-and-fire neurons (spiking neurons). The parameters of this model have direct interpretations in terms of synaptic physiology. We investigate the consequences of the model for computing with individual spikes and demonstrate through rigorous theoretical results that the computational power of the network is increased through the use of dynamic synapses.
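A simplified sketch of a dynamic stochastic synapse in this spirit (not the paper's exact model): each presynaptic spike releases a vesicle with a probability that is transiently facilitated by recent spikes and depressed after each release.

import math, random

class StochasticSynapse:
    """Illustrative stochastic synapse with facilitation and depression."""

    def __init__(self, p0=0.3, tau_f=50.0, tau_d=200.0):
        self.p0, self.tau_f, self.tau_d = p0, tau_f, tau_d
        self.facilitation = 0.0     # transiently boosts release probability
        self.depression = 0.0       # suppresses it after a release
        self.t_last = None

    def spike(self, t: float) -> bool:
        """Process a presynaptic spike at time t; return True if a vesicle is released."""
        if self.t_last is not None:
            dt = t - self.t_last
            self.facilitation *= math.exp(-dt / self.tau_f)
            self.depression *= math.exp(-dt / self.tau_d)
        self.t_last = t
        p = min(1.0, max(0.0, self.p0 + self.facilitation - self.depression))
        released = random.random() < p
        self.facilitation += 0.1          # every spike facilitates
        if released:
            self.depression += 0.2        # each release depresses future ones
        return released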
An adaptive 'adaline' neuron using chemical 'memistors'
  • B Widrow
B. Widrow. An adaptive 'adaline' neuron using chemical 'memistors'. Technical Report 1533-2, Stanford Electronics Laboratories, Stanford, CA, October 1960.
Parallel Distributed Processing
  • D Rumelhart
  • J McClelland
D. Rumelhart and J. McClelland. Parallel Distributed Processing, volumes 1 & 2. MIT Press, Cambridge, MA, 1986.
The organisation of behavior
  • D O Hebb
D. O. Hebb. The Organisation of Behavior. New York, 1949.
Digital integrated circuits: a design perspective
  • J M Rabaey
J. M. Rabaey. Digital Integrated Circuits: A Design Perspective.