Figure 2 - uploaded by Nicholas D. Skuda
I-V characteristics of memristor model. 

Source publication
Conference Paper
Full-text available
Resource-constrained devices are the building blocks of the internet of things (IoT) era. Since the idea behind IoT is to develop an interconnected environment where the devices are tiny enough to operate with limited resources, several control systems have been built to maintain low energy and area consumption while operating as IoT edge devices....

Contexts in source publication

Context 1
... memristor model we have used in this paper is based on [1]. This model was developed from experimental results for nanoscale hafnium-oxide (HfOₓ) memristors fabricated within a 65 nm CMOS process [2,3]. Fig. 2 shows the current-voltage relationship for an example HfOₓ memristor, specifically illustrating the characteristics of the device when switching between the LRS and HRS resistance states. When a voltage greater than a threshold of 1 V is applied across the memristor, the device will switch from HRS to LRS. Likewise, an applied voltage below the negative threshold of about −0.7 V will cause the device to switch from LRS to HRS. In both cases, the applied voltage should also be held for at least a minimum "switching time" (typically 10–100 ns) in order to switch fully between the two extreme resistance states, HRS and LRS. Resistance states between LRS and HRS are also achievable by applying short, nanosecond pulses that essentially "nudge" the resistance. This variety and range of resistance is particularly useful for representing synaptic weights in the spiking neuromorphic model considered. The ability to change these resistance values with controlled pulses is also leveraged to implement online learning mechanisms such as spike-timing-dependent plasticity (STDP) [4]. ...
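The switching behavior described in this context can be sketched in a few lines of Python. The thresholds, resistance extremes, and hold-time range follow the text above; the linear "nudge" rule for sub-threshold-duration pulses and all numeric values are illustrative assumptions, not the actual model from [1].

```python
# Illustrative threshold-switching memristor sketch (not the model from [1]).
LRS = 10e3        # low-resistance state, ohms (assumed value)
HRS = 100e3       # high-resistance state, ohms (assumed value)
V_SET = 1.0       # positive threshold: HRS -> LRS (from the text)
V_RESET = -0.7    # negative threshold: LRS -> HRS (from the text)
T_SWITCH = 50e-9  # full-switch hold time, within the stated 10-100 ns range

def apply_pulse(resistance, voltage, duration):
    """Return the resistance after one voltage pulse."""
    if voltage >= V_SET:
        if duration >= T_SWITCH:
            return LRS                      # held long enough: full SET
        frac = duration / T_SWITCH          # short pulse: nudge toward LRS
        return resistance + frac * (LRS - resistance)
    if voltage <= V_RESET:
        if duration >= T_SWITCH:
            return HRS                      # held long enough: full RESET
        frac = duration / T_SWITCH          # short pulse: nudge toward HRS
        return resistance + frac * (HRS - resistance)
    return resistance                       # sub-threshold voltage: no change

r = HRS
r = apply_pulse(r, 1.2, 100e-9)   # long SET pulse: switches fully to LRS
r = apply_pulse(r, -0.8, 5e-9)    # short RESET pulse: nudges toward HRS
```

Intermediate states (the "nudges" used for synaptic weights) fall out of the sub-threshold-duration branch, while full SET/RESET pulses pin the device to the LRS or HRS extremes.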

Citations

... This enables parallel updates of the synaptic weights in the crossbar array, making training of the crossbar array efficient. In edge computing/edge artificial intelligence (AI), enabling on-chip learning on edge devices prevents the data breaches that can otherwise occur when edge devices provide only inference [6][7][8]. Because crossbar arrays perform VMM and outer-product operations very efficiently, on-chip learning in crossbar arrays becomes very attractive for edge-AI applications [4,6]. ...
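The efficiency argument in the passage above can be made concrete: a crossbar computes a vector-matrix multiply (VMM) in one read, and the training update is a rank-1 outer product applied to every cell in parallel. The plain-Python sketch below is an illustration of those two operations, not any particular crossbar's programming model; `G`, `vmm`, and `outer_product_update` are names introduced here.

```python
# Illustrative crossbar operations: G is the conductance (weight) matrix.
def vmm(v_in, G):
    """Output currents i_j = sum_k v_k * G[k][j] -- one crossbar read."""
    rows, cols = len(G), len(G[0])
    return [sum(v_in[k] * G[k][j] for k in range(rows)) for j in range(cols)]

def outer_product_update(G, x, delta, lr=0.1):
    """Rank-1 update G[k][j] += lr * x[k] * delta[j], applied to all cells.
    In hardware this happens in parallel; the loops here just emulate it."""
    for k in range(len(G)):
        for j in range(len(G[0])):
            G[k][j] += lr * x[k] * delta[j]
    return G

G = [[0.0, 0.0], [0.0, 0.0]]
G = outer_product_update(G, x=[1.0, 2.0], delta=[0.5, -0.5])
currents = vmm([1.0, 1.0], G)  # read out the updated weights in one step
```

In an actual array, both operations take constant time regardless of matrix size, which is the source of the training efficiency the citing authors describe.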
Article
Full-text available
Topological-soliton-based devices, like the ferromagnetic domain-wall device, have been proposed as non-volatile memory (NVM) synapses in electronic crossbar arrays for fast and energy-efficient implementation of on-chip learning of neural networks. High linearity and symmetry in the synaptic weight-update characteristic of the device (long-term potentiation (LTP) and long-term depression (LTD)) are important requirements to obtain high classification/regression accuracy in such an on-chip learning scheme. However, obtaining such linear and symmetric LTP and LTD characteristics in the ferromagnetic domain-wall device has remained a challenge. Here, we first carry out micromagnetic simulations of the device to show that the incorporation of defects at the edges of the device, with the defects having higher perpendicular magnetic anisotropy (PMA) compared to the rest of the ferromagnetic layer, leads to massive improvement in the linearity and symmetry of the LTP and LTD characteristics of the device. This is because these defects act as pinning centres for the domain wall and prevent it from moving during the delay time between two consecutive programming current pulses, which is not the case when the device does not have defects. Next, we carry out system-level simulations of two crossbar arrays with synaptic characteristics of domain-wall synapse devices incorporated in them: one without such defects, and one with such defects. For on-chip learning of both Long Short Term Memory Networks (LSTMs) (using a regression task) and Fully Connected Neural Networks (FCNN) (using a classification task), we show improved performance when the domain-wall synapse devices have defects at the edges. We also estimate the energy consumption in these synaptic devices and project their scaling, with respect to on-chip learning in corresponding crossbar arrays.
... This is outlined in the "Implementation details and methods" and "Testing results" sections. 3. We analyze the performance of the virtual neuron on neuromorphic hardware by analyzing the run time using a digital neuromorphic hardware design (Caspian), and we estimate the energy usage of the virtual neuron on a mixed-signal memristor-based hardware design (mrDANNA) [19]. This is presented in the "Testing results" section. ...
... With neuromorphic application-specific integrated circuits, the power required for a particular network execution can be estimated from the energy required for active and idle neurons and synapses over the duration of the execution. To estimate the power of the virtual neuron design, we used the same method and energy-per-spike values as reported by Chakma et al. [19] for the mrDANNA mixed-signal, memristor-based neuromorphic processor. Using the same number of spikes, neurons, and synapses as reported in the μCaspian simulation, we estimate that a mrDANNA hardware implementation would use ∼23 nJ for the average test case run and ∼23 mW for continuous operation. ...
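The estimation method quoted above is simple accounting: active-spike energy plus idle energy per component gives the energy of one execution, and dividing by run time gives continuous power. The sketch below illustrates that arithmetic; all numeric values are placeholders chosen only to land in the same nJ/mW range as the quoted estimate, not the per-spike figures reported by Chakma et al.

```python
# Back-of-the-envelope energy/power accounting for one network execution.
def estimate_energy(n_spikes, e_spike, n_neurons, e_neuron_idle,
                    n_synapses, e_synapse_idle):
    """Active-spike energy plus per-component idle energy, in joules."""
    return (n_spikes * e_spike
            + n_neurons * e_neuron_idle
            + n_synapses * e_synapse_idle)

def continuous_power(energy_per_run, run_time):
    """Average power (watts) if runs execute back to back."""
    return energy_per_run / run_time

# Placeholder counts and energies (illustrative, not mrDANNA's values):
e = estimate_energy(n_spikes=1000, e_spike=20e-12,
                    n_neurons=50, e_neuron_idle=30e-12,
                    n_synapses=200, e_synapse_idle=5e-12)
p = continuous_power(e, run_time=1e-6)   # ~tens of nJ -> ~tens of mW
```

With these placeholder numbers the run costs 22.5 nJ and continuous operation draws 22.5 mW, showing how a ∼nJ-per-run figure maps to ∼mW at microsecond run times.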
Article
Full-text available
Neuromorphic computers emulate the human brain while being extremely power efficient for computing tasks. In fact, they are poised to be critical for energy-efficient computing in the future. Neuromorphic computers are primarily used in spiking neural network–based machine learning applications. However, they are known to be Turing-complete, and in theory can perform all general-purpose computation. One of the biggest bottlenecks in realizing general-purpose computations on neuromorphic computers today is the inability to efficiently encode data on the neuromorphic computers. To fully realize the potential of neuromorphic computers for energy-efficient general-purpose computing, efficient mechanisms must be devised for encoding numbers. Current encoding mechanisms (e.g., binning, rate-based encoding, and time-based encoding) have limited applicability and are not suited for general-purpose computation. In this paper, we present the virtual neuron abstraction as a mechanism for encoding and adding integers and rational numbers by using spiking neural network primitives. We evaluate the performance of the virtual neuron on physical and simulated neuromorphic hardware. We estimate that the virtual neuron could perform an addition operation using just 23 nJ of energy on average with a mixed-signal, memristor-based neuromorphic processor. We also demonstrate the utility of the virtual neuron by using it in some of the μ-recursive functions, which are the building blocks of general-purpose computation.
... With neuromorphic application-specific integrated circuits, the power required for a particular network execution can be estimated from the energy required for active and idle neurons and synapses over the duration of the execution. To estimate the power of the virtual neuron design, we used the same method and energy-per-spike values as reported in [48] for the mrDANNA mixed-signal memristor-based neuromorphic processor. Using the same number of spikes, neurons, and synapses as reported in the µCaspian simulation, we estimate that a mrDANNA hardware implementation would use ∼23 nJ for the average test case run and ∼23 mW for continuous operation. ...
Preprint
Full-text available
Neuromorphic computers perform computations by emulating the human brain, and use extremely low power. They are expected to be indispensable for energy-efficient computing in the future. While they are primarily used in spiking neural network-based machine learning applications, neuromorphic computers are known to be Turing-complete, and thus, capable of general-purpose computation. However, to fully realize their potential for general-purpose, energy-efficient computing, it is important to devise efficient mechanisms for encoding numbers. Current encoding approaches have limited applicability and may not be suitable for general-purpose computation. In this paper, we present the virtual neuron as an encoding mechanism for integers and rational numbers. We evaluate the performance of the virtual neuron on physical and simulated neuromorphic hardware and show that it can perform an addition operation using 23 nJ of energy on average using a mixed-signal memristor-based neuromorphic processor. We also demonstrate its utility by using it in some of the mu-recursive functions, which are the building blocks of general-purpose computation.
... Meanwhile, bio-inspired Deep Learning algorithms are becoming more successful, but at the cost of more computing power, prohibiting them from being implemented in local environments constrained by size, weight, and power (SWaP) limits. For these algorithms to be implemented in constrained environments, they will require new energy-efficient hardware implementations [1]- [4]. It's reasonable to expect bio-inspired algorithms to perform better on bio-inspired hardware, where processing and memory are collocated in the hardware's architecture. ...
... The subcircuits of silicon neurons often consist of CMOS components that are exposed to manufacturing inconsistencies [1], [5], [12]. If discrepancies between different instances of a fabricated circuit are unaccounted for, they could cause the circuit to behave less predictably. ...
Article
Full-text available
Analog circuits have proven to be a reliable medium for neuromorphic architectures in silicon, capable of emulating parts of the brain’s computational processes. Information in the brain is shared between neurons in the form of spikes, where a neuron’s soma emits a voltage signal after integrating multiple synaptic input currents. To preserve the quality of this information in a rate-based protocol, it is important that a post-synaptic neuron is able to adapt its firing rate to some desired or expected rate, especially in a noisy environment. This paper presents an analog Izhikevich neuron circuit and a proof-of-concept tuning algorithm that adjusts the neuron’s spike rate using proportional feedback control. The neuron circuit is implemented using discrete surface-mount components on a custom printed circuit board, and its output spike pattern is modified by adjusting one of the neuron’s four voltage parameters. Adjusting the voltage parameters gives a neuromorphic system designer the ability to calibrate these silicon neurons in the presence of noise or component imperfections, ensuring a baseline configuration where all neurons exhibit the same, expected spike response given the same input.
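The tuning scheme this abstract describes — stepping one voltage parameter with proportional feedback until the neuron's firing rate matches a target — can be sketched as a simple control loop. The linear rate model, the gain, and all names below are illustrative assumptions standing in for the actual Izhikevich circuit and its four voltage parameters.

```python
# Proportional-feedback tuning of a neuron's firing rate (illustrative).
KP = 0.5                 # proportional gain (assumed)
SENSITIVITY = 400.0      # assumed Hz change per unit of the tuned parameter

def measured_rate(v_param):
    """Stand-in for measuring the hardware neuron's spike rate:
    here a simple monotonic function of the tuned voltage parameter."""
    return 50.0 + SENSITIVITY * v_param

def tune(v_param, target_rate, steps=200):
    """Step the parameter against the rate error until it is small."""
    for _ in range(steps):
        error = target_rate - measured_rate(v_param)
        if abs(error) < 0.1:
            break
        v_param += KP * error / SENSITIVITY  # proportional correction
    return v_param

v = tune(v_param=0.0, target_rate=120.0)   # converges near 120 Hz
```

On real hardware `measured_rate` would be a spike-count measurement over a window, and the loop would compensate for the component imperfections and noise the paper discusses.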
... • EONS can train networks for different hardware implementations. In this work, we focus on one hardware implementation, but EONS has previously been used to train networks directly for memristive [5], digital [9,23], optoelectronic [3], and biomimetic [16] implementations. • EONS can operate within the constraints and characteristics of a hardware implementation and can be used as part of the co-design process in understanding design trade-offs. ...
... The neuromorphic non von Neumann architectures discussed in Section IV are considered to be a promising solution for energy-related issues and optimization of such systems. Moreover, neuromorphic architectures can be used to solve the cloud computing energy-related issues in memory and processing units and to achieve energy-efficient computing [30]. ...
Chapter
Full-text available
This chapter explores the pivotal role of innovative circuits and memory technologies in advancing neuromorphic computing. As the foundation of neuromorphic systems, these technologies enable the replication of complex neural behaviors and functions of the human brain, pushing the boundaries of computational efficiency and adaptability. The chapter discusses the development of spiking neural network circuits, memristive circuits, photonic neuromorphic circuits, and hybrid and flexible circuits. It also examines the challenges associated with integrating these technologies into cohesive systems that mimic biological neural networks. Through a detailed look at cutting-edge advancements, the chapter underscores the transformative potential of neuromorphic computing in driving future technological innovations and applications.
Chapter
Many robotic applications require autonomous decision making in the presence of obstacles that may be uncertain in nature, and because they are mobile, most robots are battery operated. This chapter outlines the energy and performance analysis of robotic applications using artificial neural networks. It is designed to explain the operation of robots from interpreting sensor data (training), through processing (testing) the data efficiently, to responding (prediction) to dynamic situations using self-learning and adaptability.