Fig. 1 | Comparison of the von Neumann architecture with the neuromorphic architecture. These two architectures have some fundamental differences when it comes to operation, organization, programming, communication, and timing, as depicted here.
Source publication
Neuromorphic computing technologies will be important for the future of computing, but much of the work in neuromorphic computing has focused on hardware development. Here, we review recent results in neuromorphic computing algorithms and applications. We highlight characteristics of neuromorphic computing technologies that make them attractive for...
Citations
... Considering the substantial energy consumption and the approaching downscaling limits, the need for alternative computational paradigms has been widely recognized. Consequently, unconventional computing now stands as an interdisciplinary frontier in scientific exploration [4][5][6]. A leading methodology in this domain is physical reservoir computing [7][8][9][10][11][12]. ...
Quantum reservoir computing (QRC) is a brain-inspired computational paradigm that exploits the natural dynamics of a quantum system for information processing. To date, a multitude of quantum systems have been utilized in QRC, with diverse computational capabilities demonstrated accordingly. This study proposes a reciprocal research direction: Probing quantum systems themselves through their information processing performance in the QRC framework. Building upon this concept, here we develop quantum reservoir probing (QRP), an inverse extension of the QRC. The QRP establishes an operator-level linkage between physical properties and computational performance. A systematic scan of this correspondence reveals the intrinsic quantum dynamics of the reservoir system from both computational and informational perspectives. Unifying quantum information and quantum matter, the QRP holds great promise as a potent tool for exploring various aspects of quantum many-body physics. In this study, we specifically apply it to analyze information propagation in a one-dimensional quantum Ising chain. We demonstrate that the QRP not only distinguishes between ballistic and diffusive information propagation, reflecting the system’s dynamical characteristics, but also identifies system-specific information propagation channels, a distinct advantage over conventional methods.
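Reservoir computing, quantum or classical, follows the same recipe: drive a fixed dynamical system with the input sequence and train only a linear readout on the observed states. A minimal classical echo-state sketch of that recipe (the reservoir size, spectral radius, delay task, and ridge parameter below are illustrative choices, not taken from the QRP paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed random reservoir (a classical echo-state stand-in for a quantum reservoir).
n_in, n_res = 1, 100
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.normal(0, 1, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # scale spectral radius below 1

def run_reservoir(u):
    """Collect reservoir states for a 1-D input sequence u."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + W_in @ np.array([u_t]))
        states.append(x.copy())
    return np.array(states)

# Toy task: reconstruct a delayed copy of the input from reservoir states.
u = rng.uniform(-1, 1, 2000)
target = np.roll(u, 5)                      # 5-step delay memory task
X = run_reservoir(u)[100:]                  # discard the initial transient
y = target[100:]

# Train only the linear readout (ridge regression).
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
pred = X @ W_out
print("delay-task NMSE:", np.mean((pred - y) ** 2) / np.var(y))
```

In the QRC/QRP setting described above, the tanh reservoir would be replaced by the dynamics of the quantum system, with the readout performance on such tasks then serving as the probe of the system itself.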
... Despite recent exciting advances, those computer-science-oriented photonic approaches merely emulate the connectivity and functionalities of biological systems, while failing to replicate the asynchronous, event-driven, and dynamic processing capacities of human brains. In contrast, the neuroscience-oriented photonic spiking neural network (PSNN) works in a more biologically plausible way with collocated memory and computation, event-based information coding and processing, and synaptic plasticity learning [13]. PSNNs are particularly well-suited for processing dynamic spatiotemporal information in an efficient spike-driven fashion, making them ideal for time-varying application scenarios, although their development remains in its early stages. ...
Neuromorphic photonic computing represents a paradigm shift for next-generation machine intelligence, yet critical gaps persist in emulating the brain's event-driven, asynchronous dynamics, a fundamental barrier to unlocking its full potential. Here, we report a milestone advancement of a photonic spiking neural network (PSNN) chip, the first to achieve full-stack brain-inspired computing on a complementary metal-oxide-semiconductor-compatible silicon platform. The PSNN features transformative innovations of gigahertz-scale nonlinear spiking dynamics, in situ learning capacity with supervised synaptic plasticity, and informative event representations with retina-inspired spike encoding, resolving the long-standing challenges in spatiotemporal data integration and energy-efficient dynamic processing. By leveraging its frame-free, event-driven working manner, the neuromorphic optoelectronic system achieves 80% accuracy on the KTH video recognition dataset while operating at ~100x faster processing speeds than conventional frame-based approaches. This work represents a leap for neuromorphic computing in a scalable photonic platform with low latency and high throughput, paving the way for advanced applications in real-time dynamic vision processing and adaptive decision-making, such as autonomous vehicles and robotic navigation.
... With neuromorphic computing embracing generic neural networks [10], [19], [32], our method is positioned to replace SNNs as the dominant approach for event-based vision on neuromorphic processors when high performance and efficient computation are required. Employing comprehensive hardware simulation, our study explores the efficiency gain of SEED's hardware-aware design. ...
Leveraging the high temporal resolution and dynamic range of event cameras, event-based object detection can enhance the performance and safety of automotive and robotics applications in real-world scenarios. However, processing sparse event data requires compute-intensive convolutional recurrent units, complicating their integration into resource-constrained edge applications. Here, we propose the Sparse Event-based Efficient Detector (SEED) for efficient event-based object detection on neuromorphic processors. We introduce sparse convolutional recurrent learning, which achieves over 92% activation sparsity in recurrent processing, vastly reducing the cost of spatiotemporal reasoning on sparse event data. We validated our method on Prophesee's 1 Mpx and Gen1 event-based object detection datasets. Notably, SEED sets a new benchmark in computational efficiency for event-based object detection, which requires long-term temporal learning. Compared to state-of-the-art methods, SEED significantly reduces synaptic operations while delivering equal or higher mAP. Our hardware simulations showcase the critical role of SEED's hardware-aware design in achieving energy-efficient and low-latency neuromorphic processing.
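The efficiency argument behind this kind of activation sparsity is that a recurrent matrix-vector product only needs to visit the columns whose activations are nonzero, so synaptic operations shrink roughly in proportion to the sparsity. A toy sketch of that accounting (the layer size and the 92% sparsity figure are used only for illustration; this is not SEED's actual layer structure):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 512
W = rng.normal(0, 0.1, (n, n))

# Recurrent activations with ~92% of entries forced to zero (illustrative sparsity).
h = rng.normal(0, 1, n)
h[rng.random(n) < 0.92] = 0.0

# Dense update: n*n multiply-accumulates regardless of sparsity.
dense_ops = n * n
dense_out = W @ h

# Sparse update: only columns with a nonzero activation contribute.
active = np.flatnonzero(h)
sparse_out = W[:, active] @ h[active]
sparse_ops = n * active.size

assert np.allclose(dense_out, sparse_out)
print(f"synaptic ops: dense={dense_ops}, sparse={sparse_ops} "
      f"({sparse_ops / dense_ops:.1%} of dense)")
```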
... Continued advancements in material engineering and fabrication processes are expected to further optimize ReRAM's performance, solidifying its role as a superior memory solution for RFID and other ultra-low-power embedded applications. When combined with emerging ReRAM and crossbar array technologies, these architectures enable ultra-low-power systems by reducing redundant data movement and allowing real-time adaptation to sensory inputs [25], [26]. ...
Current RFID circuits, designed primarily for basic low-power communication and data storage, cannot meet the computational needs of future AI-based IoT applications. While effective for simple identification tasks, these systems fall short in supporting advanced data processing and on-chip intelligence. Next-generation neuromorphic RFID circuits are expected to dynamically adapt based on external inputs and emulate biological neuron activity, paving the way for intelligent, low-power, and autonomous devices. This paper explores the potential of neuromorphic RFID systems driven by memristor-based architectures, leveraging ReRAM technology and crossbar arrays. ReRAM offers key advantages, including reduced energy consumption, essential for enabling local processing and real-time decision-making in intelligent RFID nodes. To demonstrate this potential, a 2×2 crossbar circuit was designed and simulated in LTspice using Biolek's memristor model. The analysis examined the circuit's response to read and EPC-like inputs, state-variable dynamics, and digital output behavior. Operating at microwatt-level power consumption and capable of processing sensor signals, the proposed architecture shows promise as a foundational building block for future low-power, intelligent, and autonomous RFID systems.
... Mainly, this design makes the SNN an event-driven model [7]. Hardware activity and energy usage occur mostly during spikes, which are relatively sparse. ...
... While research on neuromorphic computing and SNNs has grown considerably in the last decade, works that focus on benchmarking energy efficiency are still relatively niche [22,7]. Therefore, the benchmarking literature is still not extensive, and papers that focus on energy measurement and estimation in this field are rare. ...
Spiking Neural Networks (SNNs) and neuromorphic computing present a promising alternative to traditional Artificial Neural Networks (ANNs) by significantly improving energy efficiency, particularly in edge and implantable devices. However, assessing the energy performance of SNN models remains a challenge due to the lack of standardized and actionable metrics and the difficulty of measuring energy consumption in experimental neuromorphic hardware. In this paper, we conduct a preliminary exploratory study of energy efficiency metrics proposed in the SNN benchmarking literature. We classify 13 commonly used metrics based on four key properties: Accessibility, Fidelity, Actionability, and Trend-Based analysis. Our findings indicate that while many existing metrics provide useful comparisons between architectures, they often lack practical insights for SNN developers. Notably, we identify a gap between accessible and high-fidelity metrics, limiting early-stage energy assessment. Additionally, we emphasize the lack of metrics that provide practitioners with actionable insights, making it difficult to guide energy-efficient SNN development. To address these challenges, we outline several research directions: bridging accessibility and fidelity, finding new Actionable metrics for implantable neuromorphic devices, introducing more Trend-Based metrics, metrics that reflect changes in power requirements, and battery-aware metrics, and improving energy-performance tradeoff assessments. The results from this paper pave the way for future research on enhancing energy metrics and their Actionability for SNNs.
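As a concrete example of the accessibility/fidelity tension discussed above, one widely used and easily accessible proxy simply multiplies the number of spike-driven synaptic operations by an assumed energy per operation; it says nothing about memory traffic, leakage, or any specific chip. A minimal sketch with placeholder numbers (none of them measured values):

```python
# Rough, accessible energy proxy for an SNN workload: count synaptic operations
# driven by spikes and multiply by an assumed energy per accumulate.
# All constants below are illustrative assumptions, not measured hardware values.

fan_out = 1000                 # synapses driven by each spiking neuron
spikes_per_inference = 5000    # total spikes emitted during one inference
energy_per_synop_j = 1e-12     # assumed ~1 pJ per synaptic accumulate

syn_ops = spikes_per_inference * fan_out
energy_j = syn_ops * energy_per_synop_j
print(f"synaptic ops: {syn_ops:.2e}, estimated energy: {energy_j * 1e6:.2f} µJ")
```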
... With the advent of the post-Moore era, traditional computing chips based on the von Neumann architecture are facing problems such as the power wall and the memory wall, and increasing their computing power is becoming ever more challenging [1][2][3]. With the rise of artificial intelligence (AI) techniques based on artificial neural networks, the demand for AI model scalability and for power-efficient training is increasing every year, especially in the fields of medicine, autonomous driving, and the Internet of Things [4][5][6]. ...
The increasing scalability of artificial intelligence (AI) algorithms places high demands on efficient computing architectures and physical platforms. Intelligent photonic computing, with high parallelism, high speed, low latency, and multi-dimensional data modulation properties, shows its potential for dealing with current challenges of energy consumption in both computing clouds and edge devices. The implementation of photonic devices and physical computing systems for AI computing was reviewed, from free-space to on-chip platforms. Firstly, the mathematics and physics of AI algorithms, including feedforward neural networks, recurrent neural networks, and spiking neural networks, were introduced. Secondly, principles of devices and applications of new architectures were summarised, including 3D and 2D metasurfaces based on diffractive deep neural networks, and on-chip waveguide devices such as Mach–Zehnder interferometers and microring resonators. Thirdly, two emerging topics in AI computing, reconfigurability and non-linearity, were surveyed; these are important factors for achieving in situ training and backpropagation on the path towards general AI computing. Finally, mainstream intelligent photonic computing platforms were compared, and an outlook on the remaining challenges was given.
... computing architectures have been proposed to meet the computational demands of deep learning. One notable approach involves utilizing crossbar array (CBA) processes to fabricate arrays of two-terminal vertical memristor devices, enabling the hardware-level realization of neural network operations [10][11][12][13][14]. Central to neuromorphic computing, this strategy relies on two key physical mechanisms: (1) When input data is vectorized and applied as voltage pulses to each wordline, the memristor at each crosspoint responds according to Ohm's law, corresponding to the multiplication of entries in VMM operations; (2) The currents flowing through the bitlines are summed according to Kirchhoff's current law. ...
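Numerically, the two mechanisms reduce the vector-matrix multiply to a single conductance-matrix product, I = Gᵀ·V: Ohm's law gives the per-device current and Kirchhoff's current law sums those currents along each bitline. A minimal sketch of this ideal crossbar (random conductances, no wire resistance or device non-idealities):

```python
import numpy as np

# Ideal memristor crossbar performing a vector-matrix multiply (VMM).
# Rows = wordlines (inputs), columns = bitlines (outputs).
rng = np.random.default_rng(2)
n_rows, n_cols = 4, 3

# Device conductances (siemens) encode the weight matrix.
G = rng.uniform(1e-6, 1e-4, (n_rows, n_cols))

# Input vector applied as wordline voltage pulses (volts).
V = np.array([0.2, 0.0, 0.1, 0.3])

# Ohm's law per crosspoint: I_ij = G_ij * V_i; Kirchhoff's current law per
# bitline: the column current is the sum over all crosspoints in that column.
I_bitline = G.T @ V            # one analog step computes the whole VMM
print("bitline currents (A):", I_bitline)

# Same result written out explicitly, column by column.
assert np.allclose(I_bitline, [(G[:, j] * V).sum() for j in range(n_cols)])
```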
Hardware architectures inspired by neural networks offer an effective approach to overcoming the limitations of conventional processors, particularly in high-dimensional and energy-efficient deep learning tasks. Among these, two-terminal vertical memristors have gained attention as compact units capable of emulating synapses, neurons, and dendritic functions. Their intrinsic time-dependent switching enables dynamic computation, which is essential for real-time and large-scale processing of unstructured data. However, the lack of theoretical guidance remains a barrier to scaling such bio-inspired functions into integrated systems. This review provides a structured overview of approaches for analyzing and designing memristive systems applicable to neuromorphic hardware. It covers theoretical investigations based on first-principles calculations, numerical simulations of device behavior, and physical modeling techniques for representative memristor devices. In addition, it discusses device-level characteristics relevant to neural signal processing and their implications for system integration. Collectively, the review offers a comprehensive foundation for the development of materials and architectures that support efficient and scalable computing in next-generation systems.
... Together, these features create an extremely energy-efficient processing paradigm. SNNs can be deployed on neuromorphic chips designed specifically to execute them [14]. Several designs have been developed; for example, DYNAP-SE [15], Loihi [16], Akida [17], and TrueNorth [18]. ...
... For ternary neurons with symmetric thresholds, where v_thn = v_thp, we have p+ = p− (according to Equations 18 and 12). Thus, based on Equation 14, regardless of m(t), the expected gradient becomes zero. ...
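The flavor of this cancellation can be checked numerically: with symmetric thresholds and a membrane potential distributed symmetrically about zero, the positive and negative firing probabilities coincide, so any gradient term proportional to their difference averages to zero. The sketch below assumes such a symmetric setup and a gradient term proportional to E[s]; it does not reproduce the cited paper's Equations 12, 14, and 18:

```python
import numpy as np

# Toy illustration of the symmetric-threshold cancellation. Assumes a zero-mean
# membrane potential and a gradient term proportional to E[s]; this is not the
# cited paper's derivation.
rng = np.random.default_rng(3)

v_thp = 1.0          # positive firing threshold
v_thn = 1.0          # negative firing threshold (symmetric case: v_thn = v_thp)

def ternary_spike(m):
    """Ternary output: +1 above v_thp, -1 below -v_thn, else 0."""
    return np.where(m >= v_thp, 1.0, np.where(m <= -v_thn, -1.0, 0.0))

m = rng.normal(0.0, 1.5, 1_000_000)     # symmetric membrane potential samples
s = ternary_spike(m)
p_plus, p_minus = np.mean(s == 1), np.mean(s == -1)
print(f"p+ = {p_plus:.4f}, p- = {p_minus:.4f}, E[s] = {s.mean():+.5f}")
# p+ ~= p-, so a gradient estimate proportional to (p+ - p-) averages to zero.
```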
We propose a new ternary spiking neuron model to improve the representation capacity of binary spiking neurons in deep Q-learning. Although a ternary neuron model has recently been introduced to overcome the limited representation capacity offered by binary spiking neurons, we show that its performance is worse than that of binary models in deep Q-learning tasks. Through mathematical and empirical analysis, we hypothesize that gradient estimation bias during training is the underlying cause. We propose a novel ternary spiking neuron model to mitigate this issue by reducing the estimation bias. We use the proposed ternary spiking neuron as the fundamental computing unit in a deep spiking Q-learning network (DSQN) and evaluate the network's performance in seven Atari games from the Gym environment. Results show that the proposed ternary spiking neuron mitigates the drastic performance degradation of ternary neurons in Q-learning tasks and improves the network performance compared to the existing binary neurons, making DSQN a more practical solution for on-board autonomous decision-making tasks.
... Stochasticity is another key property of our network, implementing the precision of inference and allowing it to strike a balance between stability and flexibility. This inherent stochasticity yields an exceptional fit with energy-efficient neuromorphic architectures [Schuman et al., 2022], particularly within the emerging field of thermodynamic computing [Melanson et al., 2025]. Finally, the recursive nature of our FEP-based formal framework provides a principled way to build hierarchical, multiscale attractor networks which may be exploited for boosting the efficiency, robustness and explainability of large-scale AI systems. ...
Attractor dynamics are a hallmark of many complex systems, including the brain. Understanding how such self-organizing dynamics emerge from first principles is crucial for advancing our understanding of neuronal computations and the design of artificial intelligence systems. Here we formalize how attractor networks emerge from the free energy principle applied to a universal partitioning of random dynamical systems. Our approach obviates the need for explicitly imposed learning and inference rules and identifies emergent, but efficient and biologically plausible inference and learning dynamics for such self-organizing systems. These result in a collective, multi-level Bayesian active inference process. Attractors on the free energy landscape encode prior beliefs; inference integrates sensory data into posterior beliefs; and learning fine-tunes couplings to minimize long-term surprise. Analytically and via simulations, we establish that the proposed networks favor approximately orthogonalized attractor representations, a consequence of simultaneously optimizing predictive accuracy and model complexity. These attractors efficiently span the input subspace, enhancing generalization and the mutual information between hidden causes and observable effects. Furthermore, while random data presentation leads to symmetric and sparse couplings, sequential data fosters asymmetric couplings and non-equilibrium steady-state dynamics, offering a natural extension to conventional Boltzmann Machines. Our findings offer a unifying theory of self-organizing attractor networks, providing novel insights for AI and neuroscience.
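For readers unfamiliar with attractor networks, the basic recall behavior this line of work builds on can be illustrated with a classical Hopfield-style network: stored patterns act as attractors (priors) and a noisy cue relaxes toward the nearest one, i.e. inference as relaxation. This is only a generic stand-in, not the paper's free-energy-principle derivation or its learning rule:

```python
import numpy as np

# Generic Hopfield-style attractor recall (a classical stand-in, not the paper's
# FEP-based formulation): stored patterns are attractors, a corrupted cue is
# pulled back toward the stored pattern by iterated relaxation.
rng = np.random.default_rng(4)
n, n_patterns = 200, 3
patterns = rng.choice([-1.0, 1.0], size=(n_patterns, n))

# Hebbian couplings (symmetric, zero diagonal).
W = (patterns.T @ patterns) / n
np.fill_diagonal(W, 0.0)

# Corrupt one stored pattern to serve as noisy sensory evidence.
cue = patterns[0].copy()
flip = rng.choice(n, size=40, replace=False)
cue[flip] *= -1

# Relax toward an attractor by iterating the sign update.
state = cue.copy()
for _ in range(20):
    state = np.sign(W @ state)
    state[state == 0] = 1.0

overlap = (state @ patterns[0]) / n
print(f"overlap with stored pattern after relaxation: {overlap:+.2f}")
```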
... Neuromorphic computing has emerged as a compelling approach for ultra-energy-efficient data processing, with potential power reductions of several orders of magnitude compared to conventional systems [1]. Drawing inspiration from the event-driven computation and the high energy efficiency observed in biological neurons, neuromorphic circuits employ fundamentally analog structures and spike-based signaling, representing a significant departure from conventional von Neumann architectures. ...
... Neuromorphic systems can be implemented using analog circuits that emulate continuous-time dynamics or digital ones that approximate them discretely. These systems typically offer a configurable interconnection fabric and exhibit high energy efficiency due to their event-driven, massively parallel architecture [1], [6]. Such features make them especially suitable for PHY-layer algorithms with inherent parallelism, such as those spanning subcarriers or symbols. ...
... In order to support symbol durations on the order of 5 µs while producing multiple spikes per symbol, the LIF neuron is configured with a relatively short membrane time constant, such as τ_m = 0.5 µs. At this timescale, the neuron repeatedly charges and discharges its membrane capacitor C_m up to a threshold voltage v_th, with each spike event consuming energy approximately equal to (1/2)·C_m·v_th², typically falling in the range of 1-10 pJ [9], [10]. Assuming an average spike count of 14 spikes per symbol when transmitting a "1" at high SNR as discussed in Sec. ...
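Plugging representative numbers into E ≈ (1/2)·C_m·v_th² shows how the per-spike and per-symbol budgets quoted above arise; the capacitance and threshold below are illustrative values chosen to land in the 1-10 pJ range, not the actual circuit parameters:

```python
# Back-of-the-envelope spike energy from E = 0.5 * C_m * v_th^2.
# C_m and v_th are illustrative choices that land in the quoted 1-10 pJ range.
C_m = 10e-12        # membrane capacitance (farads)
v_th = 1.0          # threshold voltage (volts)
tau_m = 0.5e-6      # membrane time constant (seconds), as in the text

energy_per_spike = 0.5 * C_m * v_th**2        # joules
spikes_per_symbol = 14                         # average for a "1" at high SNR
energy_per_symbol = spikes_per_symbol * energy_per_spike

print(f"energy per spike : {energy_per_spike * 1e12:.1f} pJ")
print(f"energy per symbol: {energy_per_symbol * 1e12:.1f} pJ")
```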
Neuromorphic computing, inspired by biological neural systems, has emerged as a promising approach for ultra-energy-efficient data processing by leveraging analog neuron structures and spike-based computation. However, its application in communication systems remains largely unexplored, with existing efforts mainly focused on mapping isolated communication algorithms onto spiking networks, often accompanied by substantial, traditional computational overhead due to transformations required to adapt problems to the spiking paradigm. In this work, we take a fundamentally different route and, for the first time, propose a fully neuromorphic communication receiver by applying neuromorphic principles directly in the analog domain from the very start of the receiver processing chain. Specifically, we examine a simple transmission scenario: a BPSK receiver with repetition coding, and show that we can achieve joint detection and decoding entirely through spiking signals. Our approach demonstrates error-rate performance gains over conventional digital realizations with power consumption on the order of microwatts, comparable with a single very low-resolution Analog-to-Digital Converter (ADC) utilized in digital receivers. To maintain performance under varying noise conditions, we also introduce a novel noise-tracking mechanism that dynamically adjusts neural parameters during transmission. Finally, we discuss the key challenges and directions toward ultra-efficient neuromorphic transceivers.
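For context, the conventional digital baseline such a spiking receiver is measured against is straightforward: for BPSK with an R-fold repetition code over AWGN, maximum-likelihood joint detection and decoding reduces to summing the R received samples of each bit and thresholding at zero. A short simulation of that baseline (the repetition factor, Eb/N0, and bit count are arbitrary; this is not the paper's neuromorphic receiver):

```python
import numpy as np

# Conventional baseline for BPSK with repetition coding: sum the R noisy copies
# of each bit and threshold the sum (maximum-likelihood for AWGN). This is the
# digital reference point, not the neuromorphic implementation.
rng = np.random.default_rng(5)
R = 5                      # repetition factor (illustrative)
n_bits = 200_000
ebn0_db = 4.0

bits = rng.integers(0, 2, n_bits)
symbols = np.repeat(2 * bits - 1, R).astype(float)    # BPSK: 0 -> -1, 1 -> +1

# Per-coded-symbol noise accounts for the rate-1/R repetition code.
ebn0 = 10 ** (ebn0_db / 10)
noise_std = np.sqrt(R / (2 * ebn0))
received = symbols + rng.normal(0.0, noise_std, symbols.size)

# Joint detection/decoding: integrate the R samples per bit, then take the sign.
decisions = received.reshape(n_bits, R).sum(axis=1) > 0
ber = np.mean(decisions != bits.astype(bool))
print(f"BER at Eb/N0 = {ebn0_db} dB with R = {R}: {ber:.2e}")
```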