Figure 3: Total layout of the mrDANNA mixed-signal neuromorphic design (36 mrDANNA cores alongside a digital implementation of the architecture).
Source publication
Resource-constrained devices are the building blocks of the Internet of Things (IoT) era. Since the idea behind IoT is to develop an interconnected environment where the devices are tiny enough to operate with limited resources, several control systems have been built to maintain low energy and area consumption while operating as IoT edge devices....
Contexts in source publication
Context 1
... constrained IoT devices are always in need of devices and techniques that consume less power and energy, as these are the major constraints for compact systems with a limited battery supply. Hence, memristor-based spiking neuromorphic computing architectures are particularly attractive as viable solutions for IoT-based machine learning. Moreover, several emergent nanoscale devices (memristors included) are being leveraged in such systems that promise lower area and power consumption [7,8,16]. For our part, non-volatile memristors have been used to design synapses that ensure low energy consumption while storing synaptic weights. Different materials and their corresponding devices will exhibit differences in energy consumption. For instance, for HfOx devices, the energy while the synapse is active, idle, and learning is detailed in Table 1. Here, the active phase of the synapse refers to its normal mode of operation; the idle phase is the inactive condition; and the learning phase includes both potentiation and depression of the synaptic weights based on neuron fires. Each of these synapse states consumes energy depending on the type of memristive device used and the peripheral control circuitry.

Mixed-signal CMOS neurons can also be designed for energy and area efficiency, specifically when using CMOS integrate-and-fire neurons consisting of very few transistors [5]. The primary concern in terms of area is the size of the capacitors. Capacitor area can be mitigated via the use of different memristors, which in turn ensure proper incoming current flow into the neuron. The mixed-signal neuron operates in three phases: accumulation, idle, and firing. The energy consumed by the neuron during these phases is listed in Table 2. Here, the accumulation energy refers to the energy consumed while accumulating incoming charge/spikes from the synaptic weights. The idle energy is comparatively low since the neuron is mostly inactive, with the exception of peripheral circuitry. Unlike the synapse, the neuron does not consume energy specifically for the learning process. Instead, the neuron consumes energy while producing firing spikes. This energy covers the generation of a post-neuron fire when the accumulated charge crosses the threshold of that particular neuron. Moreover, the shaping of the firing spike is also an important factor for energy consumption, since the generated firing pulses are fed into the next stage of neuromorphic cores.

The neuromorphic architecture we consider (mrDANNA) is mixed-signal in nature. Hence the architecture is well suited for low power and area efficiency, and it includes both the synapse and neuron models described above. Multiple synapses and neurons are grouped into memristive neuromorphic cores, which we call mrDANNA cores. Each core contains energy-efficient memristive synapses and an analog integrate-and-fire (IAF) neuron. The full layout of the design (shown in Figure 3) consists of 36 placements of mrDANNA cores on the right side, with the left side containing a digital implementation of the architecture. The advantage of a mixed-signal design is that the connections to the outside are fully digital, whereas integration within the core itself is analog. Hence, the mixed-signal memristive neuromorphic system discussed here is digital between cores and analog within each core. Moreover, the mixed-signal approach is also more energy efficient relative to comparable digital implementations. Since mixed-signal models provide the opportunity to implement synapse and neuron models with better area and power efficiency, we are inclined towards improving the design of our neuromorphic cores for use in multiple different applications, including classification, control, and anomaly detection. ...
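To make the phase-based energy accounting above concrete, here is a minimal Python sketch of an integrate-and-fire neuron that tallies energy across the accumulation, idle, and firing phases. The per-event energy constants, threshold, and weight are illustrative placeholders, not the measured values from Tables 1 and 2.

```python
# Minimal integrate-and-fire neuron with per-phase energy accounting.
# The energy-per-event constants below are illustrative placeholders,
# NOT the measured values from Tables 1 and 2 of the source publication.

E_ACCUMULATE = 1.0e-12  # J per accumulated input spike (placeholder)
E_IDLE       = 1.0e-14  # J per idle timestep (placeholder)
E_FIRE       = 5.0e-12  # J per output spike (placeholder)

def simulate_iaf(inputs, threshold=3.0, weight=1.0):
    """Accumulate weighted input spikes; fire and reset at threshold.

    inputs: iterable of 0/1 spike indicators, one per timestep.
    Returns (output_spikes, total_energy_joules).
    """
    charge, spikes, energy = 0.0, [], 0.0
    for spike_in in inputs:
        if spike_in:
            charge += weight          # accumulation phase
            energy += E_ACCUMULATE
        else:
            energy += E_IDLE          # idle phase
        if charge >= threshold:       # firing phase: post-neuron fire
            spikes.append(1)
            energy += E_FIRE
            charge = 0.0              # reset accumulated charge
        else:
            spikes.append(0)
    return spikes, energy

out, e = simulate_iaf([1, 1, 0, 1, 1, 0, 1])
print(out, f"{e:.2e} J")
```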
Citations
... This enables parallel update of the synaptic weights of the crossbar array, making training of the crossbar array efficient. In edge computing / edge artificial intelligence (AI), enabling on-chip learning at the edge devices prevents data breaches, which can otherwise happen if edge devices offer only inference [6-8]. Since on-chip learning in a crossbar array is very efficient, owing to the efficient vector-matrix multiplication (VMM) and outer-product operations it supports, on-chip learning in crossbar arrays becomes very attractive for edge AI applications [4,6]. ...
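As a rough illustration of why crossbar training is efficient, the NumPy sketch below treats inference as a vector-matrix multiplication and applies a rank-1 (outer-product) weight update; on a physical crossbar both operations happen in parallel across all cells. The array sizes, learning rate, and delta-rule update are illustrative assumptions, not the cited papers' exact scheme.

```python
# Sketch of crossbar-style learning: inference is a vector-matrix
# multiply (one analog step on hardware), and an outer-product update
# adjusts every synapse simultaneously. Sizes/values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
G = rng.uniform(0.0, 1.0, size=(4, 3))   # conductances = synaptic weights
x = rng.uniform(0.0, 1.0, size=4)        # input voltages

y = x @ G                                # VMM: one crossbar read step

# Outer-product (rank-1) update, e.g. a delta-rule step: every cell
# G[i, j] receives x[i] * err[j] in parallel on hardware.
target = np.array([0.5, 0.2, 0.9])
err = target - y
eta = 0.1                                # learning rate (assumed)
G += eta * np.outer(x, err)
print(y, "\n", G)
```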
Topological-soliton-based devices, like the ferromagnetic domain-wall device, have been proposed as non-volatile memory (NVM) synapses in electronic crossbar arrays for fast and energy-efficient implementation of on-chip learning of neural networks. High linearity and symmetry in the synaptic weight-update characteristic of the device (long-term potentiation (LTP) and long-term depression (LTD)) are important requirements to obtain high classification/regression accuracy in such an on-chip learning scheme. However, obtaining such linear and symmetric LTP and LTD characteristics in the ferromagnetic domain-wall device has remained a challenge. Here, we first carry out micromagnetic simulations of the device to show that the incorporation of defects at the edges of the device, with the defects having higher perpendicular magnetic anisotropy (PMA) compared to the rest of the ferromagnetic layer, leads to massive improvement in the linearity and symmetry of the LTP and LTD characteristics of the device. This is because these defects act as pinning centres for the domain wall and prevent it from moving during the delay time between two consecutive programming current pulses, which is not the case when the device does not have defects. Next, we carry out system-level simulations of two crossbar arrays with synaptic characteristics of domain-wall synapse devices incorporated in them: one without such defects, and one with such defects. For on-chip learning of both long short-term memory (LSTM) networks (using a regression task) and fully connected neural networks (FCNNs) (using a classification task), we show improved performance when the domain-wall synapse devices have defects at the edges. We also estimate the energy consumption in these synaptic devices and project their scaling, with respect to on-chip learning in corresponding crossbar arrays.
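To illustrate the linearity requirement discussed above, the following sketch uses a common phenomenological model of conductance updates in which a nonlinearity parameter makes successive LTP pulses saturate. It is not the micromagnetic device model from the paper, and the parameter values are assumptions.

```python
# Behavioral sketch of linear vs. nonlinear synaptic weight updates (LTP).
# This is a common phenomenological model, not the micromagnetic model
# of the cited work; nu controls the nonlinearity (nu = 0 is ideal/linear).
import math

def potentiate(g, g_min=0.0, g_max=1.0, pulses=30, nu=0.0):
    """Apply `pulses` LTP steps; larger nu -> more saturating response."""
    trace = [g]
    step = (g_max - g_min) / pulses
    for _ in range(pulses):
        frac = (g - g_min) / (g_max - g_min)
        g = min(g_max, g + step * math.exp(-nu * frac))
        trace.append(g)
    return trace

ideal = potentiate(0.0, nu=0.0)   # linear response: easy to train with
real  = potentiate(0.0, nu=3.0)   # saturating response typical of devices
print(ideal[-1], real[-1])
```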
... This is outlined in the "Implementation details and methods" and "Testing results" sections. We analyze the performance of the virtual neuron on neuromorphic hardware by measuring its run time on a digital neuromorphic hardware design (Caspian), and we estimate the energy usage of the virtual neuron on a mixed-signal memristor-based hardware design (mrDANNA) [19]. This is presented in the "Testing results" section. ...
... With neuromorphic application-specific integrated circuits, the power required for a particular network execution can be estimated from the energy required for active and idle neurons and synapses over the duration of the execution. To estimate the power of the virtual neuron design, we used the same method and energy-per-spike values as reported by Chakma et al. [19] for the mrDANNA mixed-signal, memristor-based neuromorphic processor. Using the same number of spikes, neurons, and synapses as reported in the μCaspian simulation, we estimate that an mrDANNA hardware implementation would use ~23 nJ for the average test case run and around 23 mW for continuous operation. ...
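A back-of-the-envelope version of this estimation method is sketched below: total energy is a sum of event counts times per-event energies, and the reported ~23 nJ per run together with ~23 mW continuous operation implies roughly one run per microsecond. The event counts and per-event energies in the sketch are placeholders, not the published values.

```python
# Back-of-the-envelope version of the estimation method described above:
# total energy = sum over event types of (count * energy-per-event), and
# average power = energy per run / run time. Counts and per-event energies
# are illustrative placeholders, NOT the published mrDANNA values.
events = {
    "neuron_fire":      (500,   20e-12),  # (count, J per event)
    "synapse_event":    (3000,   2e-12),
    "neuron_idle_step": (10000, 0.1e-12),
}
energy = sum(n * e for n, e in events.values())
print(f"energy per run ~ {energy * 1e9:.1f} nJ")

# Consistency check on the reported figures: ~23 nJ per run sustained at
# ~23 mW implies roughly one run per microsecond.
print(f"implied run time ~ {23e-9 / 23e-3 * 1e6:.1f} us")
```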
Neuromorphic computers emulate the human brain while being extremely power efficient for computing tasks. In fact, they are poised to be critical for energy-efficient computing in the future. Neuromorphic computers are primarily used in spiking neural network–based machine learning applications. However, they are known to be Turing-complete, and in theory can perform all general-purpose computation. One of the biggest bottlenecks in realizing general-purpose computations on neuromorphic computers today is the inability to efficiently encode data on the neuromorphic computers. To fully realize the potential of neuromorphic computers for energy-efficient general-purpose computing, efficient mechanisms must be devised for encoding numbers. Current encoding mechanisms (e.g., binning, rate-based encoding, and time-based encoding) have limited applicability and are not suited for general-purpose computation. In this paper, we present the virtual neuron abstraction as a mechanism for encoding and adding integers and rational numbers by using spiking neural network primitives. We evaluate the performance of the virtual neuron on physical and simulated neuromorphic hardware. We estimate that the virtual neuron could perform an addition operation using just 23 nJ of energy on average with a mixed-signal, memristor-based neuromorphic processor. We also demonstrate the utility of the virtual neuron by using it in some of the μ-recursive functions, which are the building blocks of general-purpose computation.
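For intuition about spike-based number encoding, here is a minimal sketch of a place-value (binary) mapping between an integer and a group of binary-firing neurons. This shows only the general flavor of such encodings; it is not the paper's virtual neuron construction, and the addition at the end is done in host arithmetic rather than with spiking-network primitives.

```python
# Hedged sketch of place-value spike encoding of integers. This is NOT
# the paper's virtual neuron construction; it only shows how an integer
# maps onto a group of binary-firing neurons and back.

def encode(value, n_neurons=8):
    """Integer -> spike vector: neuron i fires iff bit i of value is set."""
    return [(value >> i) & 1 for i in range(n_neurons)]

def decode(spikes):
    """Spike vector -> integer, weighting neuron i by 2**i."""
    return sum(s << i for i, s in enumerate(spikes))

a, b = 11, 6
# In the paper, the addition itself is performed by spiking-network
# primitives; here it is ordinary host arithmetic for illustration.
print(encode(a), encode(b), decode(encode(a)) + decode(encode(b)))  # -> 17
```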
... Meanwhile, bio-inspired deep learning algorithms are becoming more successful, but at the cost of more computing power, prohibiting them from being implemented in local environments constrained by size, weight, and power (SWaP) limits. For these algorithms to be implemented in constrained environments, they will require new energy-efficient hardware implementations [1]-[4]. It is reasonable to expect bio-inspired algorithms to perform better on bio-inspired hardware, where processing and memory are collocated in the hardware's architecture. ...
... The subcircuits of silicon neurons often consist of CMOS components that are exposed to manufacturing inconsistencies [1], [5], [12]. If discrepancies between different instances of a fabricated circuit are unaccounted for, they could cause the circuit to behave less predictably. ...
Analog circuits have proven to be a reliable medium for neuromorphic architectures in silicon, capable of emulating parts of the brain’s computational processes. Information in the brain is shared between neurons in the form of spikes, where a neuron’s soma emits a voltage signal after integrating multiple synaptic input currents. To preserve the quality of this information in a rate-based protocol, it is important that a post-synaptic neuron is able to adapt its firing rate to some desired or expected rate, especially in a noisy environment. This paper presents an analog Izhikevich neuron circuit and a proof-of-concept tuning algorithm that adjusts the neuron’s spike rate using proportional feedback control. The neuron circuit is implemented using discrete surface-mount components on a custom printed circuit board, and its output spike pattern is modified by adjusting one of the neuron’s four voltage parameters. Adjusting the voltage parameters gives a neuromorphic system designer the ability to calibrate these silicon neurons in the presence of noise or component imperfections, ensuring a baseline configuration where all neurons exhibit the same, expected spike response given the same input.
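As a software analogue of the calibration loop described in this abstract, the sketch below simulates an Izhikevich neuron and uses proportional feedback to tune one model parameter (here d, the recovery increment) toward a target firing rate. The gains, target rate, and choice of tuned parameter are assumptions; the paper tunes a voltage parameter on analog hardware.

```python
# Sketch of proportional-feedback rate tuning on a software Izhikevich
# neuron, analogous in spirit to the calibration loop described above.
# Gains, target rate, and the tuned parameter (d) are illustrative.

def izhikevich_rate(d, I=10.0, steps=10000, dt=0.5):
    """Simulate an Izhikevich neuron; return firing rate in spikes/step."""
    a, b, c = 0.02, 0.2, -65.0           # regular-spiking parameters
    v, u, spikes = c, b * c, 0
    for _ in range(steps):
        v += dt * (0.04 * v * v + 5 * v + 140 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:                    # spike: reset membrane, bump recovery
            v, u, spikes = c, u + d, spikes + 1
    return spikes / steps

target, d, gain = 0.01, 8.0, 200.0
for _ in range(20):                      # proportional controller on d
    rate = izhikevich_rate(d)
    d -= gain * (target - rate)          # larger d -> more adaptation -> lower rate
    d = max(0.05, d)                     # keep the parameter physically sensible
print(f"d={d:.2f}, rate={izhikevich_rate(d):.4f} spikes/step (target {target})")
```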
...
• EONS can train networks for different hardware implementations. In this work, we focus on one hardware implementation, but EONS has previously been used to train networks directly for memristive [5], digital [9,23], optoelectronic [3], and biomimetic [16] implementations.
• EONS can operate within the constraints and characteristics of a hardware implementation and can be used as part of the co-design process in understanding design trade-offs. ...
... The neuromorphic non-von Neumann architectures discussed in Section IV are considered a promising solution for energy-related issues and for optimization of such systems. Moreover, neuromorphic architectures can be used to solve cloud computing's energy-related issues in memory and processing units and to achieve energy-efficient computing [30]. ...
This chapter explores the pivotal role of innovative circuits and memory technologies in advancing neuromorphic computing. As the foundation of neuromorphic systems, these technologies enable the replication of complex neural behaviors and functions of the human brain, pushing the boundaries of computational efficiency and adaptability. The chapter discusses the development of spiking neural network circuits, memristive circuits, photonic neuromorphic circuits, and hybrid and flexible circuits. It also examines the challenges associated with integrating these technologies into cohesive systems that mimic biological neural networks. Through a detailed look at cutting-edge advancements, the chapter underscores the transformative potential of neuromorphic computing in driving future technological innovations and applications.
Many robotic applications require autonomous decision making, and the obstacles involved may be uncertain in nature. Because of their mobility, most robots are battery operated. This chapter outlines the energy and performance analysis of robotic applications using artificial neural networks. It is designed to explain the operation of robots from understanding sensor data (training), through processing (testing) the data efficiently, to responding (prediction) to dynamic situations using self-learning and adaptability.