
Gangotree Chakma
- Doctor of Philosophy
- Engineer at Micron Technology, Inc.
About
15 Publications
8,365 Reads
278 Citations
Publications (15)
Resistive Random Access Memory (ReRAM), a form of non-volatile memory, has been proposed as a Flash memory replacement. In addition, novel circuit architectures have been proposed that rely on newly discovered or predicted behavior of ReRAM. One such architecture is the memristive Dynamic Adaptive Neural Network Array, developed to emulate the func...
Memory and Central Processing Units (CPU) are the primary computing resources for any circuit simulation job. Speed, efficiency, and performance of these jobs depend on how the resources in the server farm are leveraged and optimized. But depending on the user's selection as well as the simulator's architecture, these resources might be over- or underutilize...
Resource constrained devices are the building blocks of the internet of things (IoT) era. Since the idea behind IoT is to develop an interconnected environment where the devices are tiny enough to operate with limited resources, several control systems have been built to maintain low energy and area consumption while operating as IoT edge devices....
Training deep learning networks is a difficult task due to computational complexity, and this is traditionally handled by simplifying network topology to enable parallel computation on graphical processing units (GPUs). However, the emergence of quantum devices allows reconsideration of complex topologies. We illustrate a particular network topolog...
Neuromorphic computing systems are alternatives to conventional microprocessors, often built from unconventional hardware. Designing and evaluating these systems requires multiple levels of simulation, from the device level to the circuit level to the system level. In this paper, we describe the system level simulator of a neuromorphic computing sy...
Neuromorphic computing is a non-von Neumann computer architecture for the post Moore's law era of computing. Since a main focus of the post Moore's law era is energy-efficient computing with fewer resources and less area, neuromorphic computing contributes effectively to this research. In this paper we present a memristive neuromorphic system for i...
Neuromorphic computing is a promising post-Moore's law era technology. A wide variety of neuromorphic computer (NC) architectures have emerged in recent years, ranging from traditional fully digital CMOS to nanoscale implementations with novel, beyond CMOS components. There are already major questions associated with how we are going to program and...
In this work, we apply a spiking neural network model and an associated memristive neuromorphic implementation to an application in classifying temporal scientific data. We demonstrate that the spiking neural network model achieves comparable results to a previously reported convolutional neural network model, with significantly fewer neurons and s...
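The spiking model referenced above is not specified in this abstract; as a minimal illustration of the general idea, the sketch below simulates a single leaky integrate-and-fire (LIF) neuron driven by a binary input train. All parameters (leak factor, threshold, weight) are hypothetical and not taken from the paper.

```python
# Toy leaky integrate-and-fire (LIF) neuron: an illustrative model only,
# not the specific network from the publication. Parameters are assumptions.
def simulate_lif(inputs, v_rest=0.0, v_thresh=1.0, leak=0.9, w=0.5):
    v = v_rest
    spikes = []
    for x in inputs:
        v = leak * v + w * x      # leaky integration of the weighted input
        if v >= v_thresh:         # crossing the threshold emits a spike
            spikes.append(1)
            v = v_rest            # membrane potential resets after a spike
        else:
            spikes.append(0)
    return spikes

print(simulate_lif([1, 1, 1, 0, 0, 1]))  # → [0, 0, 1, 0, 0, 0]
```

Because state is carried between time steps in the membrane potential, such a neuron responds to temporal patterns rather than single samples, which is what makes spiking models a natural fit for temporal data.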
Memristors are widely leveraged in neuromorphic systems for constructing synapses. Resistance switching characteristics of memristors enable online learning in synapses. This paper addresses a fundamental issue associated with the design of synapses with memristors whose switching rates in either direction differ up to two orders of magnitude. A tw...
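To make the asymmetry concrete, the toy model below assumes a synapse whose potentiation and depression rates differ by two orders of magnitude, and compensates by issuing extra pulses in the slow direction. The rate values and the compensation scheme are illustrative assumptions, not the design proposed in the paper.

```python
# Hypothetical memristive synapse with asymmetric switching rates:
# potentiation (SET) changes conductance 100x faster than depression (RESET).
RATE_UP = 1.0      # conductance increase per potentiating pulse
RATE_DOWN = 0.01   # conductance decrease per depressing pulse (100x slower)

def update_weight(w, direction, balanced=True):
    """Apply one logical weight update; optionally balance the asymmetry
    by issuing extra pulses in the slow (depression) direction."""
    if direction == "up":
        return min(w + RATE_UP, 10.0)
    pulses = round(RATE_UP / RATE_DOWN) if balanced else 1  # 100 pulses
    return max(w - pulses * RATE_DOWN, 0.0)

w = update_weight(5.0, "up")    # → 6.0
w = update_weight(w, "down")    # balanced depression → 5.0
```

Without such balancing, repeated up/down training updates would drift the weight upward, since a single depressing pulse undoes only 1% of a potentiating one.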
Current Deep Learning approaches have been very successful using convolutional neural networks (CNN) trained on large graphical processing units (GPU)-based computers. Three limitations of this approach are: 1) they are based on a simple layered network topology, i.e., highly connected layers, without intra-layer connections; 2) the networks are ma...