Hongyu An
Virginia Tech | VT · Department of Electrical and Computer Engineering
Doctor of Engineering
About
20
Publications
32,605
Reads
273
Citations
Introduction
My research interests include, but are not limited to, the following: Neuromorphic Electronic Circuit Design for Brain-Inspired Computing Systems, Three-Dimensional Integrated Circuit Design and Analysis, Artificial Intelligence, Machine Learning, and Cognitive Computing.
More information can be found at:
http://an-hongyu.github.io/vt
Skills and Expertise
Publications (20)
The information communicated among neurons in Spiking Neural Networks (SNNs) is represented as spiking signals. The outstanding energy efficiency of SNNs stems from the minimal computational cost of the nonlinear calculations of the neurons and the communication power between them. In this paper, we present a three-dimensional (3D) Memristive Spik...
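The spike-based communication described above can be illustrated with a minimal leaky integrate-and-fire (LIF) neuron sketch. This is a hypothetical toy model with made-up parameters, not the paper's circuit: the neuron only emits discrete spikes when its membrane potential crosses a threshold, which is the mechanism behind SNNs' low communication cost.

```python
# Minimal leaky integrate-and-fire neuron (hypothetical parameters,
# not taken from the paper). Communication happens only via binary spikes.

def lif_run(input_current, v_th=1.0, leak=0.9, v_reset=0.0):
    """Simulate one LIF neuron over a sequence of input currents.
    Returns the binary spike train."""
    v = 0.0
    spikes = []
    for i in input_current:
        v = leak * v + i      # leaky integration of the input
        if v >= v_th:         # threshold crossing -> emit a spike
            spikes.append(1)
            v = v_reset       # reset the membrane potential
        else:
            spikes.append(0)
    return spikes

print(lif_run([0.5, 0.5, 0.5, 0.0, 0.6, 0.6]))  # -> [0, 0, 1, 0, 0, 1]
```

Most time steps produce no spike, so a hardware realization spends energy only on the rare threshold crossings rather than on continuous-valued activations.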
Human brains can complete numerous intelligent tasks, such as pattern recognition, reasoning, control, and movement, with remarkable energy efficiency (20 W). In contrast, a typical computer recognizes only 1,000 different objects yet consumes about 250 W of power [1]. These significant performance differences stem from the intrinsically different structure...
Three-dimensional integrated circuits (3D-ICs) are a cutting-edge design methodology that places circuitry vertically, aiming for a high-speed and energy-efficient system with the smallest design area. In this article, a novel 3-D neuromorphic system is proposed and analyzed, which utilizes the fabricated two-layer memristor as the electronic syna...
Deep Neural Networks (DNNs), a brain-inspired learning methodology, require tremendous amounts of data for training before performing inference tasks. Recent studies demonstrate a strong positive correlation between inference accuracy and the size of the DNNs and datasets, which leads to an inevitable demand for large DNNs. However, conventional memo...
Associative memory is a widespread self-learning mechanism in biological organisms, which enables the nervous system to remember the relationship between two concurrent events. The significance of rebuilding associative memory at the behavioral level is not only to reveal a way of designing a brain-like self-learning neuromorphic system but also to explore...
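The concurrent-event learning described above can be sketched with a Hebbian-style toy model of classical conditioning. This is a hypothetical illustration, not the system proposed in the paper: an unconditioned stimulus ("food") always triggers the output, and the synaptic weight of a conditioned stimulus ("bell") grows whenever it fires together with the output.

```python
# Hebbian-style associative learning sketch (hypothetical toy model).
# Each training pair is (bell, food), both 0 or 1.

def train(pairs, w_bell=0.0, w_food=1.0, lr=0.5, threshold=0.9):
    """Strengthen the bell synapse when bell and output fire together."""
    for bell, food in pairs:
        out = 1 if bell * w_bell + food * w_food >= threshold else 0
        w_bell += lr * bell * out  # "fire together, wire together"
    return w_bell

# Before pairing, the bell alone cannot reach the threshold; after
# repeated bell+food pairings, its weight alone exceeds it.
w = train([(1, 1), (1, 1)])
print(w >= 0.9)  # -> True: the bell alone now triggers the response
```

After training, presenting the bell without food still drives the output above threshold, which is the associative-memory behavior the paper rebuilds in hardware.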
Neuromorphic computing, an emerging non-von Neumann computing paradigm that mimics the physical structure and signal processing techniques of mammalian brains, could potentially achieve the computing and power efficiency of mammalian brains. This chapter will discuss the state-of-the-art research trends in neuromorphic computing with memristors as el...
To improve the training efficiency of neural network-based machine learning, a memristor-based nonlinear computing module is designed and analyzed. Nonlinear computing operations are widely needed in neuromorphic computing and deep learning. The proposed nonlinear computing module can potentially realize a monotonic nonlinear function by successiv...
Because of fabrication compatibility to current semiconductor technology, three-dimensional integrated circuits (3D-ICs) offer promising near-term solutions for maintaining Moore's Law. 3D-ICs proffer high system speeds, massively parallel processing, low power consumption, and their high densities result in small footprints. In this paper, a novel...
An advanced neurophysiological computing system can incorporate a 3D integration system composed of emerging nano-scale devices to provide a massively parallel, high-speed, low-cost, and energy-efficient hardware implementation. Due to process technology constraints, a certain number of redundant Through-Silicon Vias (TSVs) and dummy TSVs are...
Neuromorphic computing based on three-dimensional integrated circuits (3D-NCs) offers a novel hardware implementation of neuromorphic computing, providing high device density, massively parallel signal processing capability, low power consumption, and direct analog signal processing capability. In this paper, by replacing conventional CPUs based...
The temporal attention mechanism is one of the oldest and most challenging problems in neuroscience, artificial intelligence, and neuromorphic computing. The significance of investigating the temporal attention mechanism lies not only in building a more power-efficient neuromorphic computing system but also in exploring the knowledge of the perception system of hum...
Neurophysiological architecture using 3D integration technology offers a high device interconnection density as well as fast and energy-efficient links among the neuron and synapse layers. In this paper, we propose to reconfigure the Through-Silicon Vias (TSVs) to serve as the neuronal membrane capacitors that map the membrane electrical activitie...