
Adnan Siraj Rakin
- Doctor of Philosophy
- Assistant Professor at Binghamton University
Website: https://www.adnansirajrakin.com/
About
Publications: 66
Reads: 23,839
Citations: 1,795
Introduction
I am an Assistant Professor in the Department of Computer Science at Binghamton University (SUNY). Previously, I completed my Ph.D. in Computer Engineering at Arizona State University (ASU), advised by Dr. Deliang Fan. Before joining ASU, I received my B.Sc. degree in Electrical and Electronic Engineering (EEE) from the Bangladesh University of Engineering and Technology in 2016, and I completed my M.Sc. degree in Computer Engineering at ASU in 2021. My research focuses on AI security.
Publications (66)
Recent developments in the field of Deep Learning have exposed the underlying vulnerability of Deep Neural Networks (DNNs) to adversarial examples. In image classification, an adversarial example is a carefully modified image that is visually indistinguishable from the original image but causes a DNN model to misclassify it. Training the network with...
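A minimal sketch of how such a modification can be crafted, using the fast gradient sign method on a toy logistic-regression "model" (the weights, input, and perturbation budget below are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """Fast gradient sign method on a logistic-regression model.

    Moves the input x by eps in the direction that increases the
    cross-entropy loss, yielding a visually similar input that is
    more likely to be misclassified.
    """
    z = np.dot(w, x) + b                 # logit
    p = 1.0 / (1.0 + np.exp(-z))         # sigmoid probability
    grad_x = (p - y) * w                 # d(loss)/dx for cross-entropy
    return x + eps * np.sign(grad_x)     # L_inf-bounded perturbation

# Toy example: a 4-"pixel" input and a fixed linear classifier.
x = np.array([0.2, 0.8, 0.5, 0.1])
w = np.array([1.0, -2.0, 0.5, 1.5])
b = 0.0
x_adv = fgsm_perturb(x, w, b, y=1, eps=0.1)   # each pixel moves by at most 0.1
```

The perturbation is bounded per pixel by eps, which is what makes the change visually imperceptible while still shifting the model's decision.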
Security of modern Deep Neural Networks (DNNs) is under severe scrutiny as the deployment of these models becomes widespread in many intelligence-based applications. Most recently, DNNs have been attacked through Trojans, which can effectively infect the model during the training phase and get activated only through specific input patterns (i.e., trigger) du...
Security of machine learning is increasingly becoming a major concern due to the ubiquitous deployment of deep learning in many security-sensitive domains. Many prior studies have shown external attacks such as adversarial examples that tamper with the integrity of DNNs using maliciously crafted inputs. However, the security implication of internal...
Robust machine learning formulations have emerged to address the prevalent vulnerability of deep neural networks to adversarial examples. Our work draws the connection between optimal robust learning and the privacy-utility tradeoff problem, which is a generalization of the rate-distortion problem. The saddle point of the game between a robust clas...
Traditional Deep Neural Network (DNN) security is mostly related to the well-known adversarial input example attack. Recently, another dimension of adversarial attack, namely, attack on DNN weight parameters, has been shown to be very powerful. As a representative one, the Bit-Flip-based adversarial weight Attack (BFA) injects an extremely small am...
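The attack's core primitive, flipping a single bit of a stored quantized weight, can be sketched as follows (a hand-rolled illustration assuming 8-bit two's-complement weights, not the authors' code):

```python
import numpy as np

def flip_bit(weights, idx, bit_pos):
    """Flip one bit of one int8 weight in place (two's complement).

    BFA searches for the few (idx, bit_pos) pairs that damage accuracy
    the most; flipping a sign bit (bit 7) swings the stored value
    across almost its entire range.
    """
    raw = weights.view(np.uint8)          # reinterpret the bytes, no copy
    raw[idx] ^= np.uint8(1 << bit_pos)    # XOR toggles exactly one bit
    return weights

w = np.array([3, -7, 120], dtype=np.int8)
flip_bit(w, 0, 7)                         # sign-bit flip: 3 -> -125
```

A handful of such flips, if placed on the most sensitive weights, is enough to collapse a quantized model's accuracy, which is why the search for vulnerable bits is the heart of the attack.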
Robots need task planning methods to achieve goals that require more than individual actions. Recently, large language models (LLMs) have demonstrated impressive performance in task planning. LLMs can generate a step-by-step solution using a description of actions and the goal. Despite the successes in LLM-based task planning, there is limited rese...
Recent advancements in side-channel attacks have revealed the vulnerability of modern Deep Neural Networks (DNNs) to malicious adversarial weight attacks. The well-studied RowHammer attack has effectively compromised DNN performance by inducing precise and deterministic bit-flips in the main memory (e.g., DRAM). Similarly, RowPress has emerged as a...
RowHammer stands out as a prominent example, potentially the pioneering one, showcasing how a failure mechanism at the circuit level can give rise to a significant and pervasive security vulnerability within systems. Prior research has approached RowHammer attacks within a static threat model framework. Nonetheless, it warrants consideration within...
Studies on backdoor attacks in recent years suggest that an adversary can compromise the integrity of a deep neural network (DNN) by manipulating a small set of training samples. Our analysis shows that such manipulation can make the backdoor model converge to a bad local minimum, i.e., a sharper minimum compared to a benign model. Intuitively, the...
This brief studies the impact of escalating DRAM RowHammer attack distance to potentially bypass well-developed counter-based defenses leveraging a multi-sided fault injection mechanism. By conducting systematic experimentation on 128 commercial DDR4 products, our results challenge recent research findings, showing that cells positioned at a greate...
We study three different deep-learning methods for the inverse design of photonic devices. Treating the device structures as images, we use a large dataset of structures and their characteristics to build the models. The models are then asked to design devices with the desired characteristics.
Federated Learning (FL) is a popular collaborative learning scheme involving multiple clients and a server. FL focuses on protecting clients' data but turns out to be highly vulnerable to Intellectual Property (IP) threats. Since FL periodically collects and distributes the model parameters, a free-rider can download the latest model and thus steal...
This work aims to tackle Model Inversion (MI) attack on Split Federated Learning (SFL). SFL is a recent distributed training scheme where multiple clients send intermediate activations (i.e., feature map), instead of raw data, to a central server. While such a scheme helps reduce the computational load at the client end, it opens itself to reconstr...
We present a novel deep neural network (DNN) training scheme and resistive RAM (RRAM) in-memory computing (IMC) hardware evaluation towards achieving high accuracy against RRAM device/array variations and enhanced robustness against adversarial input attacks. We present improved IMC inference accuracy results evaluated on state-of-the-art DNNs incl...
We propose hardware noise-aware deep neural network (DNN) training to largely improve the DNN inference accuracy of in-memory computing (IMC) hardware. During DNN training, we perform noise injection at the partial sum level, which matches with the crossbar structure of IMC hardware, and the injected noise data is directly based on measurements of...
Recent advances in Deep Neural Networks (DNNs) have led to their widespread deployment in multiple security-sensitive domains. The need for resource-intensive training and the use of valuable domain-specific training data have made these models a top intellectual property (IP) for model owners. One of the major threats to DNN privacy is model extraction...
Adversarial example attacks have recently exposed the severe vulnerability of neural network models. However, most of the existing attacks require some form of target model information (i.e., weights/model inquiry/architecture) to improve the efficacy of the attack. We leverage the information-theoretic connections between robust learning and gener...
Neural network stealing attacks have posed grave threats to neural network model deployment. Such attacks can be launched by extracting neural architecture information, such as layer sequence and dimension parameters, through leaky side-channels. To mitigate such attacks, we propose NeurObfuscator, a full-stack obfuscation tool to obfuscate the neu...
Recently developed adversarial weight attack, a.k.a. bit-flip attack (BFA), has shown enormous success in compromising Deep Neural Network (DNN) performance with an extremely small amount of model parameter perturbation. To defend against this threat, we propose RA-BNN that adopts a complete binary (i.e., for both weights and activation) neural net...
Adversarial attacks on Neural Network weights, such as the progressive bit-flip attack (PBFA), can cause a catastrophic degradation in accuracy by flipping a very small number of bits. Furthermore, PBFA can be conducted at run time on the weights stored in DRAM main memory. In this work, we propose RADAR, a Run-time adversarial weight Attack Detect...
Nowadays, one practical limitation of deep neural networks (DNNs) is their high degree of specialization to a single task or domain (e.g., one visual domain). This motivates researchers to develop algorithms that can adapt a DNN model to multiple domains sequentially while still performing well on the past domains, which is known as multi-domain learni...
The wide deployment of Deep Neural Networks (DNNs) in high-performance cloud computing platforms has made field-programmable gate arrays (FPGAs) a popular choice of accelerator for boosting performance due to their hardware reprogramming flexibility. To improve the efficiency of hardware resource utilization, growing efforts have been invested in FPG...
Deep Neural Network (DNN) attacks have mostly been conducted through adversarial input example generation. Recent work on adversarial attacks on DNN weights, especially the Bit-Flip based adversarial weight Attack (BFA), has proved to be very powerful. BFA is an un-targeted attack that can classify all inputs into a random output class by flipping a very...
Renewable energy sources are becoming a popular choice of energy due to their many advantages and lower environmental impact. Wind certainly emerges as one of the most promising energy sources in modern power generation. However, large-scale wind energy is associated with fluctuations in voltage and power due to its intermittent natur...
In this work, we propose a multiplication-less binarized depthwise-separable convolution neural network, called BD-Net. BD-Net is designed to use binarized depthwise separable convolution block as the drop-in replacement of conventional spatial-convolution in deep convolution neural network (DNN). In BD-Net, the computation-expensive convolution op...
A Deep Neural Network (DNN) trained by the gradient descent method is known to be vulnerable to maliciously perturbed adversarial inputs, a.k.a. adversarial attacks. As one of the countermeasures against adversarial attacks, increasing the model capacity for DNN robustness enhancement was discussed and reported as an effective approach by many recent work...
Several important security issues of Deep Neural Networks (DNNs) have been raised recently, associated with different applications and components. The most widely investigated security concern of DNNs is their malicious input, a.k.a. adversarial examples. Nevertheless, the security challenge of DNNs' parameters is not well explored yet. In this work,...
Recent studies have shown that deep neural networks (DNNs) are vulnerable to adversarial attacks. To this end, many defense approaches that attempt to improve the robustness of DNNs have been proposed. In a separate and yet related area, recent works have explored quantizing neural network weights and activation functions into low bit-width to com...
In this paper, an energy-efficient and high-speed comparator-based processing-in-memory accelerator (CMP-PIM) is proposed to efficiently execute a novel hardware-oriented comparator-based deep neural network called CMPNET. Inspired by local binary pattern feature extraction method combined with depthwise separable convolution, we first modify the e...
A novel fiber design based on hexagonal shaped holes incorporated within the core of a Kagome lattice photonic crystal fiber (PCF) is presented. The modal properties of the proposed fiber are evaluated by using a finite element method (FEM) with a perfectly matched layer as boundary condition. Simulation results exhibit an ultra-low effective mater...
Deep learning algorithms and networks are vulnerable to perturbed inputs, which are known as adversarial attacks. Many defense methodologies have been investigated to defend against such adversarial attacks. In this work, we propose a novel methodology to defend against existing powerful attack models. Such attack models have achieved record success against MN...
This paper presents a method to extract guitar chords from a given audio file using a probabilistic approach called maximum likelihood estimation. The audio file is split into smaller clips, and each clip is transformed from the time domain into the frequency domain using the Fourier Transform. There are multiple known frequencies of musical notes we denot...
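The pipeline described above can be sketched roughly as follows, with a simple spectral peak-pick standing in for the paper's maximum-likelihood step (the note table, sample rate, and test signal are illustrative assumptions):

```python
import numpy as np

# Illustrative subset of known note frequencies in Hz (open guitar strings).
NOTE_FREQS = {"E": 82.41, "A": 110.00, "D": 146.83,
              "G": 196.00, "B": 246.94}

def dominant_note(clip, sample_rate):
    """Return the known note closest to the strongest spectral peak
    of one audio clip."""
    spectrum = np.abs(np.fft.rfft(clip))             # time -> frequency domain
    freqs = np.fft.rfftfreq(len(clip), d=1.0 / sample_rate)
    peak = freqs[np.argmax(spectrum[1:]) + 1]        # skip the DC bin
    return min(NOTE_FREQS, key=lambda n: abs(NOTE_FREQS[n] - peak))

# Synthetic one-second clip: a 110 Hz sine wave (the open A string).
sr = 8000
t = np.arange(sr) / sr
clip = np.sin(2 * np.pi * 110.0 * t)
print(dominant_note(clip, sr))   # -> A
```

Note the trade-off in clip length: a one-second clip gives 1 Hz frequency resolution here, while shorter clips improve time resolution at the cost of frequency resolution.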
Double-walled carbon nanotubes have shown remarkable electrical and mechanical properties. However, their characterization has been a great challenge. Different techniques previously existed for finding the chirality of carbon nanotubes, but implementing those techniques for double-walled carbon nanotubes presents various challenges due to interactio...
Power generation is one of the major concerns for the developing countries around the world with their limited energy sources. Bangladesh, without any doubt, faces the same set of challenges as its energy sources continue to deplete. However, renewable energy sources remain the most easily available and suitable alternative. As a numb...
In this letter, we present a novel slotted core fiber incorporating a slotted cladding for the terahertz band. The modal properties of the designed fiber are numerically investigated based on an efficient finite element method. Simulation results of the fiber exhibit both a low material absorption loss of 0.0103–0.0145 cm⁻¹...
Much software has been developed to predict the optical transition energies of single-walled carbon nanotubes. Predicting the radial breathing mode frequency for double-walled carbon nanotubes has been difficult due to inter-tube interaction. Experimental values show clear indication that these frequencies and transition energies depend heavily on inte...
The main objective of this work is to determine, for the first time, the chirality of the inner and outer tubes of double-walled carbon nanotubes (DWNTs) while taking the interaction effect between the walls into account. Once the diameter is obtained from the Radial Breathing Mode (RBM) frequency of resonant Raman spectrosc...
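For reference, the diameter step mentioned above typically relies on the empirical relation between RBM frequency and tube diameter; the constants shown are representative values from the single-walled nanotube literature, not necessarily those fitted in this work:

```latex
\omega_{\mathrm{RBM}} = \frac{A}{d_t} + B,
\qquad A \approx 234~\mathrm{cm^{-1}\,nm}, \quad B \approx 10~\mathrm{cm^{-1}}
```

Here \(\omega_{\mathrm{RBM}}\) is the RBM frequency in cm⁻¹ and \(d_t\) is the tube diameter in nm; for DWNTs the relation for the inner tube is shifted by the inter-tube interaction, which is precisely the effect this work accounts for.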
Many experiments of resonant Raman spectroscopy have been carried out to successfully assign the radial breathing mode frequencies of the inner and outer tubes of double-walled carbon nanotubes. Experimental values show clear indication that these frequencies depend heavily on inter-tube interaction. All the previous efforts to establish a relation bet...
Energy conversion efficiency is a major issue for photovoltaic cells. One of the main challenges of modern-day research is to improve the efficiency of photovoltaic devices by introducing new materials and advanced concepts. The target is to reach a high efficiency level at affordable cost, which will lead to mass generation of electric...