Source publication
Due to its plug-and-play functionality and wide device support, the Universal Serial Bus (USB) protocol has become one of the most widely used protocols. However, this widespread adoption has introduced a significant security concern: the implicit trust granted to USB devices, which creates a vast array of attack vectors. Malicious USB devices...
Citations
... The widespread use of machine learning (ML) in diverse fields such as cyber security [1], healthcare [2], and autonomous vehicles [3] makes it an attractive target for adversaries. Machine learning models are susceptible to various types of adversarial attacks, typically classified as poisoning [4], evasion [5], backdoor [6], inversion [7] and inference [8,9] attacks. Some potential attacks are demonstrated in [10][11][12][13][14]. ...
... We are developing surrogate models to poison datasets because, based on the assumptions of our threat model given in Sect. 4 ...
... where $X'_{l'_c} = X_{l_c}$ and $l'_c = f(l_c)$ (4), where $f$ is the label-manipulation function, $X_{l_c}$ is the clean data point, $X'_{l'_c}$ is the poisoned data point, and $l_c$ is the label. ...
Poisoning attacks are a primary threat to machine learning (ML) models, aiming to compromise their performance and reliability by manipulating training datasets. This paper introduces a novel attack, the outlier-oriented poisoning (OOP) attack, which manipulates the labels of the samples most distant from the decision boundaries. To ascertain the severity of the OOP attack for different degrees (5–25%) of poisoning, we analyzed variance, accuracy, precision, recall, f1-score, and false positive rate for the chosen ML models. Benchmarking the OOP attack, we have analyzed key characteristics of multiclass machine learning algorithms and their sensitivity to poisoning attacks. Our analysis helps in understanding the behaviour of multiclass models against data poisoning attacks and contributes to effective mitigation of such attacks. Utilizing three publicly available datasets (IRIS, MNIST, and ISIC), our analysis shows that KNN and GNB are the most affected algorithms, with decreases in accuracy of 22.81% and 56.07% for the IRIS dataset with 15% poisoning, whereas, for the same poisoning level and dataset, Decision Trees and Random Forest are the most resilient algorithms with the least accuracy disruption (12.28% and 17.52%). We have also analyzed the correlation between the number of dataset classes and the performance degradation of the models. Our analysis highlighted that the number of classes is inversely proportional to the performance degradation, specifically the decrease in model accuracy, which is normalized with an increasing number of classes. Further, our analysis identified that an imbalanced dataset distribution can aggravate the impact of poisoning on machine learning models.
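The abstract describes the OOP attack only at a high level: flip the labels of the training samples farthest from the decision boundaries. The exact selection rule and surrogate model are not given in this excerpt, so the following is a minimal sketch of that idea, assuming a scikit-learn logistic-regression surrogate whose decision_function serves as the distance proxy, a 15% poisoning budget, and a random flip-to-label rule; none of these choices are taken from the paper.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def oop_poison(X_train, y_train, poison_rate=0.15, n_classes=3, seed=0):
    """Sketch of outlier-oriented label flipping: poison the samples
    lying farthest from their class decision boundary."""
    rng = np.random.default_rng(seed)
    # Surrogate model used only to measure distance to the decision boundaries.
    surrogate = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    margins = surrogate.decision_function(X_train)            # shape (n, n_classes)
    # Distance of each sample from its own class boundary (larger = more outlying).
    dist = np.abs(margins[np.arange(len(y_train)), y_train])
    n_poison = int(poison_rate * len(y_train))
    idx = np.argsort(dist)[-n_poison:]                        # most distant samples
    y_poisoned = y_train.copy()
    for i in idx:
        # f(l_c): replace the clean label with a different, randomly chosen class.
        choices = [c for c in range(n_classes) if c != y_train[i]]
        y_poisoned[i] = rng.choice(choices)
    return y_poisoned, idx

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
y_poisoned, flipped = oop_poison(X_tr, y_tr, poison_rate=0.15)
print(f"flipped {len(flipped)} of {len(y_tr)} training labels")
```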
... Other methods, like device and host fingerprinting, show promise in detecting USB attacks [24,25]. However, adversarial data poisoning attacks significantly impair the models' ability to detect USB attacks, rendering the defense mechanism ineffective [26][27][28]. Adversaries use adversarial data poisoning during training to degrade the performance of ML models by injecting adversarial instances [29][30][31]. Kumar et al. in [28] demonstrate a sophisticated attacker using an adversarial data poisoning technique to deceive defense mechanisms that use supervised learning-based models. ...
... The attack is designed by modifying the firmware of the attack device to mimic legitimate user behavior. ...
... However, the approach relies heavily on key-hold time, which limits its effectiveness. Prior research [28] demonstrates that RF-based models relying on complex keystroke dynamics can be bypassed using advanced evasion strategies. ...
The paper introduces a Universal Serial Bus (USB)-based defense framework, USB-GATE, which leverages a Generative Adversarial Network (GAN) and transformer-based embeddings to enhance the detection of adversarial keystroke injection attacks. USB-GATE uses a Wasserstein GAN with Gradient Penalty (WGAN-GP) to augment benign data. The framework combines benign data augmentation with multimodal transformer-based embeddings to improve the robustness of existing supervised Machine Learning (ML) models in detecting the attacks. The framework generates augmented benign data using WGAN-GP, establishing a robust baseline dataset. Subsequently, it leverages the Vision Transformer (ViT) component of Contrastive Language-Image Pre-training (CLIP) to generate embeddings that boost the performance of various supervised ML models in detecting attacks. Our evaluation highlights significant performance improvements, with the supervised ML model k-Nearest Neighbors (kNN) showing the maximum improvement, a 17% boost in accuracy when the framework is applied. The Random Forest (RF) model achieves the best overall accuracy of 81.3%, a 5% improvement when using USB-GATE. Our results demonstrate the efficacy of USB-GATE in detecting adversarial attacks, making it a promising solution for strengthening defenses, particularly against adversarial USB keystroke injection attacks.
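USB-GATE's exact pipeline (how keystroke traffic is rendered for the ViT, the WGAN-GP architecture, and the classifier hyperparameters) is not spelled out in this excerpt. The sketch below only illustrates the embedding-plus-classifier stage as described: extracting CLIP ViT image embeddings and feeding them to kNN and RF models. The checkpoint, the render_trace_as_image helper, and the toy labels are placeholder assumptions, not the authors' implementation.

```python
import numpy as np
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier

# Placeholder checkpoint; the paper's exact CLIP/ViT variant is not given in the excerpt.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def clip_embed(images):
    """Return CLIP ViT image embeddings for a list of PIL images."""
    inputs = processor(images=images, return_tensors="pt")
    with torch.no_grad():
        feats = model.get_image_features(**inputs)
    return feats.numpy()

def render_trace_as_image(trace) -> Image.Image:
    """Hypothetical rendering of a keystroke-traffic trace into an image."""
    arr = (np.clip(trace, 0, 1) * 255).astype("uint8")
    return Image.fromarray(arr).convert("RGB").resize((224, 224))

# Toy data: random "traces" standing in for benign/attack keystroke captures.
traces = [np.random.rand(32, 32) for _ in range(40)]
labels = np.array([0] * 20 + [1] * 20)          # 0 = benign, 1 = injection attack

X = clip_embed([render_trace_as_image(t) for t in traces])
knn = KNeighborsClassifier(n_neighbors=5).fit(X, labels)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)
print("train accuracy:", knn.score(X, labels), rf.score(X, labels))
```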
... We examine leading NLP articles to gather information regarding the datasets employed in adversarial attack and defense studies [4]. Figure 1 illustrates the proportion of datasets employed in the pertinent NLP tasks. ...
Advanced neural text classifiers have shown remarkable ability in classification tasks. Investigation shows, however, that text classification models have an inherent vulnerability to adversarial texts, where a few words or characters are altered to create adversarial examples that mislead the machine into making incorrect predictions while preserving the intended meaning for human readers. The present study introduces Inflect-Text, a novel word-level attack on text in a black-box setting where the inner workings of the system are unknown. The objective is to deceive a specific neural text classifier while following specified linguistic constraints, so that the changes remain undetectable to humans. Extensive investigations evaluate the viability of the proposed attack methodology on several widely used architectures, including Word-CNN, Bi-LSTM and three advanced transformer models, across two benchmark datasets commonly employed for text classification: AG News and MR. Experimental results show that the suggested attack architecture regularly outperforms conventional methods, achieving much higher attack success rates and generating better adversarial examples. The findings suggest that neural text classifiers can be bypassed, which could have substantial ramifications for existing policy approaches.
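The excerpt does not detail Inflect-Text's perturbation rules, so the following is only a generic sketch of the word-level black-box attack loop such methods share: rank words by how much deleting them changes the victim classifier's score, then greedily substitute candidate forms until the prediction flips. The victim classifier and candidate generator are hypothetical placeholders, not the paper's components.

```python
from typing import Callable, Dict, List

def word_level_attack(
    text: str,
    predict_proba: Callable[[str], Dict[str, float]],   # hypothetical black-box victim
    candidates: Callable[[str], List[str]],              # hypothetical word-form generator
    target_label: str,
) -> str:
    """Greedy word-level black-box attack: perturb the most influential words first."""
    words = text.split()

    # 1. Score word importance by the drop in the target-label probability when removed.
    base = predict_proba(text)[target_label]
    importance = []
    for i in range(len(words)):
        reduced = " ".join(words[:i] + words[i + 1:])
        importance.append((base - predict_proba(reduced)[target_label], i))

    # 2. Greedily replace words (most important first) with the best candidate form.
    adv = list(words)
    for _, i in sorted(importance, reverse=True):
        best_word, best_score = adv[i], predict_proba(" ".join(adv))[target_label]
        for cand in candidates(adv[i]):
            trial = adv.copy()
            trial[i] = cand
            score = predict_proba(" ".join(trial))[target_label]
            if score < best_score:
                best_word, best_score = cand, score
        adv[i] = best_word
        # Stop as soon as the victim no longer predicts the original label.
        current = predict_proba(" ".join(adv))
        if max(current, key=current.get) != target_label:
            break
    return " ".join(adv)

# Toy demo: a dummy "sentiment" classifier keyed on the lowercase word "good".
demo = lambda s: {"pos": 0.9, "neg": 0.1} if "good" in s else {"pos": 0.2, "neg": 0.8}
print(word_level_attack("this movie is good", demo, lambda w: [w.upper(), w + "s"], "pos"))
```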
... Dynamic detection techniques, particularly those using supervised machine learning (ML)-based IDS, have shown promise by identifying threats as soon as a USB device becomes operational. However, current models struggle against sophisticated attacks that use adversarial data poisoning, where malicious data is introduced during training to mislead the ML model, resulting in critical misclassifications [7]. ...
... With a range of peripherals spanning from low-speed devices at 1.5 Mb/s to devices running at up to 40 Gb/s, USB has become the most widely used interface [36]. Figure 1 illustrates USB communication between a device and an application (client) on a host system. The USB interface layer includes the host controller and root hub hardware, while the host controller driver (HCD) and USB system software are critical components of the host's USB infrastructure [7]. The physical layer manages bi-directional bit communication between the host and the USB device. ...
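As a concrete anchor for the host-device exchange described above, the sketch below unpacks the standard 18-byte USB device descriptor, the structure a host reads over the default control endpoint when a device attaches. The field layout follows the USB specification; the example byte string is made up for illustration.

```python
import struct

# Standard 18-byte USB device descriptor layout (little-endian, per the USB spec).
DEVICE_DESCRIPTOR = struct.Struct("<BBHBBBBHHHBBBB")
FIELDS = (
    "bLength", "bDescriptorType", "bcdUSB", "bDeviceClass", "bDeviceSubClass",
    "bDeviceProtocol", "bMaxPacketSize0", "idVendor", "idProduct", "bcdDevice",
    "iManufacturer", "iProduct", "iSerialNumber", "bNumConfigurations",
)

def parse_device_descriptor(raw: bytes) -> dict:
    """Decode the device descriptor a host requests during enumeration."""
    return dict(zip(FIELDS, DEVICE_DESCRIPTOR.unpack(raw)))

# Made-up capture of an 18-byte descriptor (values are illustrative only).
raw = bytes.fromhex("120100020000004087 04dc6100010102 0301")
desc = parse_device_descriptor(raw)
print(hex(desc["idVendor"]), hex(desc["idProduct"]), desc["bDeviceClass"])
```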
... Sophisticated attackers can circumvent defense mechanisms that use supervised ML models to protect against adversarial USB keystroke injection attacks [7]. A thorough examination shows that, by designing the attacks so that the attack samples closely resemble the distribution of benign data, the attacker can successfully evade the defense systems. ...
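The evasion idea in this excerpt, making attack samples resemble the benign data distribution, can be illustrated with a toy timing example: instead of injecting keystrokes at a fixed machine-speed interval, an attacker samples hold and inter-key times from statistics estimated on benign captures. The log-normal model and the numbers below are assumptions for illustration only, not the attack evaluated in [7].

```python
import numpy as np

rng = np.random.default_rng(42)

def fit_benign_stats(flight_ms: np.ndarray, hold_ms: np.ndarray) -> dict:
    """Estimate simple log-normal parameters from benign keystroke timings."""
    return {
        "flight": (np.log(flight_ms).mean(), np.log(flight_ms).std()),
        "hold": (np.log(hold_ms).mean(), np.log(hold_ms).std()),
    }

def schedule_injection(payload: str, stats: dict) -> list:
    """Assign each injected keystroke human-like hold and inter-key delays."""
    mu_f, sd_f = stats["flight"]
    mu_h, sd_h = stats["hold"]
    plan = []
    for ch in payload:
        plan.append({
            "key": ch,
            "hold_ms": float(rng.lognormal(mu_h, sd_h)),
            "delay_ms": float(rng.lognormal(mu_f, sd_f)),
        })
    return plan

# Toy benign captures (illustrative numbers only).
benign_flight = rng.normal(120, 30, 500).clip(40, None)   # ms between key presses
benign_hold = rng.normal(90, 20, 500).clip(30, None)      # ms each key is held
stats = fit_benign_stats(benign_flight, benign_hold)
print(schedule_injection("cmd", stats)[:2])
```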
... The widespread use of machine learning in diverse fields such as cyber security [16], healthcare [34], and autonomous vehicles [49] makes it an attractive target for adversaries. Machine learning models are susceptible to various types of adversarial attacks, typically classified as poisoning [14], evasion [10], backdoor [47], inversion [36] and inference [27], [6] attacks. Some potential attacks are demonstrated in [7,57,22,31,48]. ...
Poisoning attacks are a primary threat to machine learning models, aiming to compromise their performance and reliability by manipulating training datasets. This paper introduces a novel attack, the Outlier-Oriented Poisoning (OOP) attack, which manipulates the labels of the samples most distant from the decision boundaries. The paper also investigates the adverse impact of such attacks on different machine learning algorithms within a multiclass classification scenario, analyzing their variance and the correlation between different poisoning levels and performance degradation. To ascertain the severity of the OOP attack for different degrees (5–25%) of poisoning, we analyzed variance, accuracy, precision, recall, f1-score, and false positive rate for the chosen ML models. Benchmarking our OOP attack, we have analyzed key characteristics of multiclass machine learning algorithms and their sensitivity to poisoning attacks. Our experimentation used three publicly available datasets: IRIS, MNIST, and ISIC. Our analysis shows that KNN and GNB are the most affected algorithms, with decreases in accuracy of 22.81% and 56.07% while the false positive rate increases to 17.14% and 40.45% for the IRIS dataset with 15% poisoning. Further, Decision Trees and Random Forest are the most resilient algorithms, with the least accuracy disruption of 12.28% and 17.52% with 15% poisoning of the IRIS dataset. We have also analyzed the correlation between the number of dataset classes and the performance degradation of the models. Our analysis highlighted that the number of classes is inversely proportional to the performance degradation, specifically the decrease in model accuracy, which is normalized with an increasing number of classes. Further, our analysis identified that an imbalanced dataset distribution can aggravate the impact of poisoning on machine learning models.