Qiyuan An’s research while affiliated with Virginia Tech and other places


Publications (7)


Simultaneous Relevance and Diversity: A New Recommendation Inference Approach
  • Preprint · September 2020 · 45 Reads

Yifang Liu · Zhentao Xu · Qiyuan An · [...] · Trevor Hastie

Relevance and diversity are both important to the success of recommender systems, as they help users discover, from a large pool of items, a compact set of candidates that are not only interesting but exploratory as well. The challenge is that relevance and diversity usually act as two competing objectives in conventional recommender systems, which necessitates the classic trade-off between exploitation and exploration. Traditionally, higher diversity often means sacrificing relevance, and vice versa. We propose a new approach, heterogeneous inference, which extends general collaborative filtering (CF) by introducing a new mode of CF inference, negative-to-positive. Heterogeneous inference achieves divergent relevance, where relevance and diversity support each other as two collaborating objectives in one recommendation model, and where recommendation diversity is an inherent outcome of the relevance inference process. Benefiting from its succinctness and flexibility, our approach is applicable to a wide range of recommendation scenarios and use cases at various levels of sophistication. Our analysis and experiments on public datasets and real-world production data show that our approach outperforms existing methods on relevance and diversity simultaneously.
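The conventional exploitation-exploration trade-off that the abstract contrasts against can be illustrated with maximal marginal relevance (MMR) re-ranking, where a single weight `lam` explicitly trades relevance against similarity to already-selected items. This is a minimal sketch of the *conventional* competing-objectives setup, not the paper's heterogeneous inference; the item scores and vectors below are hypothetical.

```python
import math

def cosine(a, b):
    """Cosine similarity between two item vectors."""
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return num / den if den else 0.0

def mmr_rerank(relevance, vectors, k, lam=0.5):
    """Greedy MMR: each pick maximizes lam * relevance minus
    (1 - lam) * similarity to the most similar item already picked."""
    candidates = list(relevance)
    selected = []
    while candidates and len(selected) < k:
        def score(i):
            max_sim = max((cosine(vectors[i], vectors[j]) for j in selected),
                          default=0.0)
            return lam * relevance[i] - (1 - lam) * max_sim
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected

# Hypothetical toy data: item 1 is a near-duplicate of item 0.
relevance = {0: 0.9, 1: 0.85, 2: 0.6}
vectors = {0: [1.0, 0.0], 1: [0.99, 0.1], 2: [0.0, 1.0]}
print(mmr_rerank(relevance, vectors, k=2))  # diversity pushes item 2 ahead of 1
```

With `lam=1.0` the ranking degenerates to pure relevance ([0, 1]); lowering `lam` buys diversity at relevance's expense, which is exactly the trade-off the paper's approach aims to dissolve.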


A unified information perceptron using deep reservoir computing

July 2020 · 37 Reads · 8 Citations

Computers & Electrical Engineering

The delay feedback reservoir, a branch of reservoir computing, has attracted wide research interest because of its training efficiency and its simplicity for hardware implementation. However, its potential for processing various kinds of data, such as sequential and matrix data, has not been fully explored. In this paper, we present a unified information processing structure that fuses a convolutional or fully connected neural network with the delay feedback reservoir into a hybrid neural network model to accomplish comprehensive information processing. Our experimental results show that this methodology achieves high accuracy in both image classification and speech recognition, yielding 99.03% testing accuracy on the handwritten digits dataset (MNIST) and 97.3% on the Spoken Digit Commands Dataset (SDCD).
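The core idea of a delay feedback reservoir, a single nonlinear node whose delayed output is time-multiplexed into many "virtual" nodes by a random input mask, can be sketched in a few lines of software. This is a simplified behavioral model, not the paper's hardware implementation; the mask, gain, and feedback parameters are hypothetical.

```python
import math
import random

def dfr_states(inputs, n_virtual=20, eta=0.5, gamma=0.05, seed=0):
    """Delay feedback reservoir: one tanh node, a delay line of
    n_virtual taps, and a random input mask (time multiplexing).
    Returns one reservoir state vector per input sample."""
    rng = random.Random(seed)
    mask = [rng.uniform(-1, 1) for _ in range(n_virtual)]
    delay = [0.0] * n_virtual          # delay line holding virtual-node states
    states = []
    for u in inputs:
        for i in range(n_virtual):
            # nonlinear node driven by the masked input plus delayed feedback
            delay[i] = math.tanh(eta * delay[i] + gamma * mask[i] * u)
        states.append(list(delay))     # reservoir state for this time step
    return states

states = dfr_states([0.1, 0.5, -0.3])
print(len(states), len(states[0]))    # 3 time steps, 20 virtual nodes each
```

Only a linear readout trained on these states would be fitted, which is the source of the training efficiency the abstract mentions; in the paper's hybrid model, a CNN or fully connected front end would feed such a reservoir rather than the raw samples used here.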



A Training-Efficient Hybrid-Structured Deep Neural Network With Reconfigurable Memristive Synapses

October 2019 · 34 Reads · 33 Citations

IEEE Transactions on Very Large Scale Integration (VLSI) Systems

The continued success in the development of neuromorphic computing has immensely pushed today’s artificial intelligence forward. Deep neural networks (DNNs), a brain-like machine learning architecture, rely on intensive vector-matrix computation and deliver extraordinary performance in data-extensive applications. Recently, the nonvolatile memory (NVM) crossbar array has unveiled its intrinsic vector-matrix computation with parallel computing capability in neural network designs. In this article, we design and fabricate a hybrid-structured DNN (hybrid-DNN), combining both depth-in-space (spatial) and depth-in-time (temporal) deep learning characteristics. Our hybrid-DNN employs memristive synapses working in a hierarchical information processing fashion and delay-based spiking neural network (SNN) modules as the readout layer. Our fabricated prototype in 130-nm CMOS technology, along with experimental results, demonstrates high computing parallelism and energy efficiency with low hardware implementation cost, making the designed system a candidate for low-power embedded applications. On chaotic time-series forecasting benchmarks, our hybrid-DNN exhibits a 1.16× to 13.77× reduction in prediction error compared to state-of-the-art DNN designs. Moreover, our hybrid-DNN records 99.03% and 99.63% testing accuracy on the handwritten digit classification and the spoken digit recognition tasks, respectively.


Energy Efficient Temporal Spatial Information Processing Circuits Based on STDP and Spike Iteration

October 2019 · 18 Reads · 8 Citations

IEEE Transactions on Circuits and Systems II: Express Briefs

In this brief, we propose a novel energy-efficient temporal-spatial information processing circuit that serves as the signal pre-processing interface for spiking neural networks. To transform sensory information into a highly efficient neural-like spike train, a temporal-spatial inter-spike interval (ISI) encoder based on an iteration encoding scheme is designed and analyzed. Moreover, a decoder is designed with the spike-timing-dependent plasticity (STDP) principle, which performs well in information recovery. A prototype of the proposed ISI encoder is presented, with a 3-interval encoder in a standard 180 nm CMOS technology. The proposed ISI encoder operates at a 1 MHz sampling frequency, occupies merely 0.647 mm² of die area, and consumes as little as 1.63 µW per neuron. A multi-level ISI decoder with spike width adaptation is also designed and evaluated on the CIFAR10 image dataset.
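The basic principle behind ISI encoding, mapping stimulus amplitude to the time gap between successive spikes and recovering the amplitude by inverting that map, can be shown with a toy linear code. This is only an illustration of the encode/decode round trip; it is not the paper's 3-interval iteration scheme or its STDP-based decoder, and the interval bounds are hypothetical.

```python
def isi_encode(samples, t_min=1.0, t_max=10.0):
    """Map each normalized sample in [0, 1] to an inter-spike interval:
    a stronger stimulus yields a shorter interval (higher firing rate)."""
    return [t_max - s * (t_max - t_min) for s in samples]

def isi_decode(intervals, t_min=1.0, t_max=10.0):
    """Invert the linear encoding to recover the stimulus amplitudes."""
    return [(t_max - t) / (t_max - t_min) for t in intervals]

samples = [0.0, 0.25, 1.0]
intervals = isi_encode(samples)
print(intervals)               # [10.0, 7.75, 1.0]
print(isi_decode(intervals))   # recovers [0.0, 0.25, 1.0]
```

The appeal over a plain rate code is that each interval carries analog information, so fewer spikes (and thus less energy) are needed per sample, which is the efficiency argument the brief makes for its hardware encoder.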


Figures from the article below: Fig. 2, cellular-level associative memory model with a memristor as the electronic synapse; Fig. 3, proposed large-scale associative neuromorphic architecture partitioned into two pathways constructed by two ANNs; Fig. 12, positive and negative output spiking signals of an SIEN with a 700 mV square-wave input stimulus; Fig. 13, (a) characteristic curve of SIEN outputs and (b) distribution of image and speech recognition scores on digits using MNIST and the Spoken Digit Commands Dataset; Fig. 14, novel memristor weight-updating scheme.

Realizing Behavior Level Associative Memory Learning Through Three-Dimensional Memristor-Based Neuromorphic Circuits
  • Article · Full-text available

July 2019 · 313 Reads · 41 Citations

IEEE Transactions on Emerging Topics in Computational Intelligence

Associative memory is a widespread self-learning mechanism in living organisms that enables the nervous system to remember the relationship between two concurrent events. The significance of rebuilding associative memory at the behavior level is not only to reveal a way of designing a brain-like self-learning neuromorphic system but also to explore a method of comprehending the learning mechanism of a nervous system. In this paper, behavior-level associative memory learning is realized that successfully associates concurrent visual and auditory information (the pronunciation and image of digits). The task is achieved by associating large-scale artificial neural networks (ANNs) with each other instead of relating multiple analog signals. In this way, the information carried and preprocessed by these ANNs can be associated. A neuron, named the signal intensity encoding neuron (SIEN), has been designed to encode the output data of the ANNs into the magnitude and frequency of analog spiking signals. The spiking signals are then correlated with an associative neural network implemented with a three-dimensional (3-D) memristor array. Furthermore, the selector devices that limit the design area in traditional memristor cells are avoided by our novel memristor weight updating scheme. With the novel SIENs, the 3-D memristive synapse, and the proposed memristor weight updating scheme, simulation results demonstrate that our proposed associative memory learning method and the corresponding circuit implementations successfully associate the pronunciation and image of digits, mimicking human-like associative memory learning behavior.
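The association of two concurrent events can be sketched with the classic Hebbian outer-product rule: a synapse strengthens when a visual unit and an auditory unit are co-active, after which presenting one modality alone recalls the other. This toy illustrates the learning principle only, not the paper's SIEN signals or 3-D memristor circuit; the binary patterns below are hypothetical.

```python
def hebbian_associate(visual, auditory, lr=1.0):
    """Outer-product Hebbian rule: weight w[i][j] grows only when visual
    unit i and auditory unit j are active at the same time."""
    return [[lr * v * a for a in auditory] for v in visual]

def recall(weights, visual, threshold=0.5):
    """Drive the auditory side through the learned synapses and threshold."""
    n_out = len(weights[0])
    drive = [sum(weights[i][j] * visual[i] for i in range(len(visual)))
             for j in range(n_out)]
    return [1 if d > threshold else 0 for d in drive]

visual = [1, 0, 1]    # hypothetical binary activity for an image of a digit
auditory = [0, 1, 0]  # hypothetical activity for the spoken digit
w = hebbian_associate(visual, auditory)
print(recall(w, visual))  # presenting the image alone recalls [0, 1, 0]
```

In the paper this outer-product role is played by the 3-D memristor array, whose conductances store the cross-modal weights, with the two large ANNs supplying the preprocessed visual and auditory activity patterns.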


Deep-DFR: A Memristive Deep Delayed Feedback Reservoir Computing System with Hybrid Neural Network Topology

June 2019 · 67 Reads · 14 Citations

Deep neural networks (DNNs), the brain-like machine learning architecture, have gained immense success in data-extensive applications. In this work, a hybrid-structured deep delayed feedback reservoir (Deep-DFR) computing model is proposed and fabricated. Our Deep-DFR employs memristive synapses working in a hierarchical information processing fashion with DFR modules as the readout layer, making the proposed deep learning structure both depth-in-space and depth-in-time. Our fabricated prototype, along with experimental results, demonstrates high energy efficiency with low hardware implementation cost. On image classification benchmarks (MNIST and SVHN), our Deep-DFR yields a 1.26× to 7.69× reduction in testing error compared to state-of-the-art DNN designs.

Citations (5)


... The echo state network (ESN) [4], a new kind of recurrent neural network (RNN) [5], was first proposed by Jaeger to solve the problems of vanishing and exploding gradients [6] during the training of traditional RNNs. The hidden layer of an ESN, also known as the reservoir, is a randomly generated sparse network with many neurons. ...

Reference:

Echo State Network Based on Improved Knowledge Distillation for Edge Intelligence
A unified information perceptron using deep reservoir computing
  • Citing Article
  • July 2020

Computers & Electrical Engineering

... As shown in Fig. 2, the currently used nonlinear nodes of time-delayed RC include electronic devices, photonic devices [19,35], spintronic devices [13,24], and biological devices [25]. Among them, electronic devices encompass field-programmable gate arrays (FPGAs) [2,12], very-large-scale integration circuits (VLSI) [5], and memristors [8,26,31,34,37]. As a nano-scale, low-power electronic device, the memristor exhibits dynamic nonlinearity and fading-memory characteristics under certain conditions, which meets the critical requirements for processing sequential signals in the reservoir layer of time-delayed RC [24]. ...

A Training-Efficient Hybrid-Structured Deep Neural Network With Reconfigurable Memristive Synapses
  • Citing Article
  • October 2019

IEEE Transactions on Very Large Scale Integration (VLSI) Systems

... To overcome these drawbacks, other encoding schemes that exploit additional properties of spike trains have been proposed for the spike encoding process. Temporal patterns, meaning the different timings of spikes within the spike train, are the most widely used aspect for encoding [29]. Accordingly, a large category of neural codes, called temporal encoding, employs both the spike count and the temporal pattern of the spike train to represent a stimulus. ...

Energy Efficient Temporal Spatial Information Processing Circuits Based on STDP and Spike Iteration
  • Citing Article
  • October 2019

IEEE Transactions on Circuits and Systems II: Express Briefs

... Several studies have investigated associative learning [7,9-15] but often face limitations such as small-scale neural networks, a preference for simulations over real-world experiments, and a lack of real-world robotic deployment for testing [11-15]. To address these limitations, we have taken a novel approach. ...

Realizing Behavior Level Associative Memory Learning Through Three-Dimensional Memristor-Based Neuromorphic Circuits

IEEE Transactions on Emerging Topics in Computational Intelligence

... Furthermore, neuromorphic systems use sparse and event-based computation, meaning that only a small percentage of the available computing resources are active for a given task, and they are only activated and consuming power as needed in response to present events. Neuromorphic computing attempts to exploit these useful properties by modeling the architecture, neuron and synaptic cells, and the way of learning observed in the brain, enabling a new era of computers and AI [32]. ...

Deep-DFR: A Memristive Deep Delayed Feedback Reservoir Computing System with Hybrid Neural Network Topology
  • Citing Conference Paper
  • June 2019