Gangotree Chakma
  • Doctor of Philosophy
  • Engineer at Micron Technology, Inc.

About

Publications: 15
Reads: 8,365
Citations: 278
Current institution
Micron Technology, Inc.
Current position
  • Engineer
Education
August 2015 - July 2019
University of Tennessee at Knoxville
Field of study
  • Electrical Engineering
March 2009 - May 2014
Bangladesh University of Engineering and Technology
Field of study
  • Electrical and Electronic Engineering

Publications

Publications (15)
Article
Resistive Random Access Memory (ReRAM), a form of non-volatile memory, has been proposed as a Flash memory replacement. In addition, novel circuit architectures have been proposed that rely on newly discovered or predicted behavior of ReRAM. One such architecture is the memristive Dynamic Adaptive Neural Network Array, developed to emulate the func...
Conference Paper
Full-text available
Memory and Central Processing Units (CPU) are the primary computing resources for any circuit simulation job. Speed, efficiency, and performance of these jobs depend on how the resources in the server farm are leveraged and optimized. But depending on the user's selection as well as the simulator's architecture, these resources might be over- or underutilize...
Conference Paper
Full-text available
Resource constrained devices are the building blocks of the internet of things (IoT) era. Since the idea behind IoT is to develop an interconnected environment where the devices are tiny enough to operate with limited resources, several control systems have been built to maintain low energy and area consumption while operating as IoT edge devices....
Article
Full-text available
Training deep learning networks is a difficult task due to computational complexity, and this is traditionally handled by simplifying network topology to enable parallel computation on graphical processing units (GPUs). However, the emergence of quantum devices allows reconsideration of complex topologies. We illustrate a particular network topolog...
Conference Paper
Full-text available
Neuromorphic computing systems are alternatives to conventional microprocessors, often built from unconventional hardware. Designing and evaluating these systems requires multiple levels of simulation, from the device level to the circuit level to the system level. In this paper, we describe the system level simulator of a neuromorphic computing sy...
Article
Full-text available
Neuromorphic computing is a non-von Neumann computer architecture for the post Moore's law era of computing. Since a main focus of the post Moore's law era is energy-efficient computing with fewer resources and less area, neuromorphic computing contributes effectively to this research. In this paper we present a memristive neuromorphic system for i...
Conference Paper
Full-text available
Neuromorphic computing is a promising post-Moore's law era technology. A wide variety of neuromorphic computer (NC) architectures have emerged in recent years, ranging from traditional fully digital CMOS to nanoscale implementations with novel, beyond CMOS components. There are already major questions associated with how we are going to program and...
Conference Paper
In this work, we apply a spiking neural network model and an associated memristive neuromorphic implementation to an application in classifying temporal scientific data. We demonstrate that the spiking neural network model achieves comparable results to a previously reported convolutional neural network model, with significantly fewer neurons and s...
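The spiking neural network model in the abstract above is built from spiking neurons that integrate input over time and fire when a threshold is crossed. A common building block for such models is the leaky integrate-and-fire (LIF) neuron; the sketch below is a generic illustration with assumed parameter values (`tau`, `v_thresh`, etc.), not the specific model or implementation from the paper.

```python
# Minimal leaky integrate-and-fire (LIF) neuron sketch.
# All parameter values here are illustrative assumptions,
# not taken from the publication above.

def lif_spikes(inputs, tau=10.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """Return a 0/1 spike train for a sequence of input currents."""
    v = v_reset
    spikes = []
    for i in inputs:
        # Leaky integration: the membrane potential decays toward zero
        # with time constant tau while accumulating the input current.
        v += dt * (-v / tau + i)
        if v >= v_thresh:
            spikes.append(1)   # threshold crossed: emit a spike
            v = v_reset        # reset the membrane potential
        else:
            spikes.append(0)
    return spikes

# A constant drive eventually pushes the membrane over threshold,
# producing a regular spike train.
train = lif_spikes([0.3] * 10)
```

Temporal data classification with such neurons typically works by feeding the time series in as input currents and reading class labels off the output spike patterns.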
Conference Paper
Full-text available
Memristors are widely leveraged in neuromorphic systems for constructing synapses. Resistance switching characteristics of memristors enable online learning in synapses. This paper addresses a fundamental issue associated with the design of synapses with memristors whose switching rates in either direction differ up to two orders of magnitude. A tw...
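The design issue described above is that a memristor's switching rate can differ by up to two orders of magnitude between the two directions, so naive weight updates potentiate much faster than they depress (or vice versa). The sketch below illustrates that asymmetry with a toy conductance-drift model; the update rule and all rate constants are assumptions for illustration, not the device model or the two-memristor solution from the paper.

```python
# Toy memristive synapse with asymmetric switching rates.
# rate_down is ~two orders of magnitude slower than rate_up,
# mirroring the asymmetry the paper addresses. All values are
# illustrative assumptions, not device parameters from the paper.

class MemristiveSynapse:
    def __init__(self, g_min=1e-6, g_max=1e-4, rate_up=1.0, rate_down=0.01):
        self.g_min, self.g_max = g_min, g_max
        self.rate_up, self.rate_down = rate_up, rate_down
        self.g = g_min  # start at the low-conductance state

    def potentiate(self, dt):
        # Fast drift toward the high-conductance state.
        self.g = min(self.g + self.rate_up * dt * (self.g_max - self.g),
                     self.g_max)

    def depress(self, dt):
        # Much slower drift back toward the low-conductance state.
        self.g = max(self.g - self.rate_down * dt * (self.g - self.g_min),
                     self.g_min)

syn = MemristiveSynapse()
syn.potentiate(dt=0.5)   # large conductance increase
syn.depress(dt=0.5)      # comparatively tiny decrease
```

With a single device, equal-duration potentiation and depression pulses therefore move the weight by very different amounts, which is the imbalance a two-memristor synapse design can compensate for.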
Article
Full-text available
Current Deep Learning approaches have been very successful using convolutional neural networks (CNN) trained on large graphical processing units (GPU)-based computers. Three limitations of this approach are: 1) they are based on a simple layered network topology, i.e., highly connected layers, without intra-layer connections; 2) the networks are ma...
Preprint
Current Deep Learning approaches have been very successful using convolutional neural networks (CNN) trained on large graphical processing units (GPU)-based computers. Three limitations of this approach are: 1) they are based on a simple layered network topology, i.e., highly connected layers, without intra-layer connections; 2) the networks are ma...