Communication flow between a USB device and an application running on the host machine

Source publication
Article
Due to its plug-and-play functionality and wide device support, the universal serial bus (USB) protocol has become one of the most widely used protocols. However, this widespread adoption has introduced a significant security concern: the implicit trust provided to USB devices, which has created a vast array of attack vectors. Malicious USB devices...
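To make the host-side half of this communication flow concrete, below is a minimal sketch using the PyUSB library (a libusb wrapper): the application locates the device, selects a configuration and interface, and exchanges data over bulk endpoints. The vendor/product IDs and the 64-byte read length are hypothetical placeholders, not values from the article.

```python
# Minimal host-side sketch of the USB communication flow using PyUSB
# (libusb backend). The vendor/product IDs and transfer sizes below are
# hypothetical placeholders.
import usb.core
import usb.util

# Enumerate: locate the device by its (hypothetical) vendor/product IDs.
dev = usb.core.find(idVendor=0x1234, idProduct=0x5678)
if dev is None:
    raise ValueError("Device not found")

# Configure: select the device's active configuration and first interface.
dev.set_configuration()
cfg = dev.get_active_configuration()
intf = cfg[(0, 0)]  # first interface, first alternate setting

# Locate the OUT and IN bulk endpoints on that interface.
ep_out = usb.util.find_descriptor(
    intf,
    custom_match=lambda e: usb.util.endpoint_direction(e.bEndpointAddress)
    == usb.util.ENDPOINT_OUT,
)
ep_in = usb.util.find_descriptor(
    intf,
    custom_match=lambda e: usb.util.endpoint_direction(e.bEndpointAddress)
    == usb.util.ENDPOINT_IN,
)

# Application-level exchange: data flows app -> host USB stack -> device and back.
ep_out.write(b"hello device")
response = ep_in.read(64)  # read up to 64 bytes from the device
print(bytes(response))
```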

Similar publications

Preprint
Seizures are caused by abnormally synchronous brain activity that can result in changes in muscle tone, such as twitching, stiffness, limpness, or rhythmic jerking. These behavioral manifestations are clear on visual inspection, and the most widely used seizure scoring systems in preclinical models, such as the Racine scale in rodents, use these beh...

Citations

... We examine 1 leading NLP articles to gather information regarding the datasets employed in adversarial attack and defense studies [4]. Figure 1 illustrates the proportion of datasets employed in the pertinent NLP tasks. ...
Article
Advanced neural text classifiers have shown remarkable ability in the task of classification. The investigation shows that text classification models have an inherent vulnerability to adversarial texts, in which a few words or characters are altered to create adversarial examples that mislead the machine into making incorrect predictions while preserving the intended meaning for human readers. The present study introduces Inflect-Text, a novel word-level text attack for the black-box setting, where the inner workings of the target system are unknown. The objective is to deceive a specific neural text classifier while respecting specified linguistic constraints so that the changes remain undetectable to humans. Extensive experiments evaluate the viability of the proposed attack on several widely used architectures, including Word-CNN, Bi-LSTM, and three advanced transformer models, across two benchmark text classification datasets: AG News and MR. Experimental results show that the proposed attack consistently outperforms conventional methods, achieving much higher attack success rates and generating better adversarial examples. The findings suggest that neural text classifiers can be bypassed, which could have substantial ramifications for existing policy approaches.
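As an illustration of the general word-level, black-box attack pattern this abstract describes (not the Inflect-Text algorithm itself), the sketch below ranks words by how much deleting them changes a stand-in classifier's score, then greedily substitutes high-importance words from a small candidate list until the predicted label flips. The toy keyword classifier and the substitution candidates are illustrative assumptions.

```python
# Generic greedy word-substitution attack in the black-box setting.
# NOT the Inflect-Text algorithm from the cited article: the keyword-counting
# "model" and the substitution candidates below are illustrative stand-ins.

def toy_classifier(text: str) -> float:
    """Stand-in black-box model: returns a pseudo P(positive) from keyword counts."""
    positive = {"good", "great", "excellent", "love"}
    negative = {"bad", "terrible", "awful", "hate"}
    words = text.lower().split()
    pos = sum(w in positive for w in words)
    neg = sum(w in negative for w in words)
    return (pos + 0.5) / (pos + neg + 1.5)  # smoothed score in (0, 1)

# Hypothetical per-word substitution candidates (misspellings/inflections).
CANDIDATES = {
    "good": ["goood", "g00d"],
    "love": ["loved", "loove"],
    "great": ["greater", "gr8"],
}

def greedy_word_attack(text: str, model, threshold: float = 0.5) -> str:
    """Greedily perturb the most influential words until the predicted label flips."""
    words = text.split()
    base = model(text)
    orig_label = base >= threshold
    push = -1.0 if orig_label else 1.0  # direction to move the score

    # Importance of each position: score change when that word is deleted.
    def importance(i):
        return abs(base - model(" ".join(words[:i] + words[i + 1:])))

    for i in sorted(range(len(words)), key=importance, reverse=True):
        best_word, best_score = words[i], model(" ".join(words))
        for cand in CANDIDATES.get(words[i].lower(), []):
            score = model(" ".join(words[:i] + [cand] + words[i + 1:]))
            if push * (score - best_score) > 0:  # moves the score toward a flip
                best_word, best_score = cand, score
        words[i] = best_word
        if (model(" ".join(words)) >= threshold) != orig_label:
            break  # predicted label flipped: adversarial example found
    return " ".join(words)

print(greedy_word_attack("such a good film i love its great pacing", toy_classifier))
```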
... The widespread use of machine learning in diverse fields such as cyber security [16], healthcare [34], and autonomous vehicles [49] makes it an attractive target for adversaries. Machine learning models are susceptible to various types of adversarial attacks, typically classified as poisoning [14], evasion [10], backdoor [47], inversion [36], and inference [27], [6] attacks. Some potential attacks are demonstrated in [7,57,22,31,48]. ...
Preprint
Poisoning attacks are a primary threat to machine learning models, aiming to compromise their performance and reliability by manipulating training datasets. This paper introduces a novel attack, the Outlier-Oriented Poisoning (OOP) attack, which manipulates the labels of the samples most distant from the decision boundaries. The paper also investigates the adverse impact of such attacks on different machine learning algorithms within a multiclass classification scenario, analyzing the variance and the correlation between poisoning levels and performance degradation. To ascertain the severity of the OOP attack for different degrees (5%-25%) of poisoning, we analyzed variance, accuracy, precision, recall, f1-score, and false positive rate for the chosen ML models. Benchmarking our OOP attack, we analyzed key characteristics of multiclass machine learning algorithms and their sensitivity to poisoning attacks. Our experimentation used three publicly available datasets: IRIS, MNIST, and ISIC. Our analysis shows that KNN and GNB are the most affected algorithms, with accuracy decreases of 22.81% and 56.07% and false positive rates rising to 17.14% and 40.45% on the IRIS dataset with 15% poisoning. Further, Decision Trees and Random Forest are the most resilient algorithms, with the smallest accuracy disruptions of 12.28% and 17.52% under 15% poisoning of the IRIS dataset. We also analyzed the correlation between the number of dataset classes and model performance degradation, finding that the degradation, specifically the decrease in accuracy, diminishes as the number of classes increases. Further, our analysis identified that an imbalanced dataset distribution can aggravate the impact of poisoning on machine learning models.
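A minimal sketch of the general idea behind such an attack, assuming a scikit-learn setup on the Iris dataset: a surrogate model's decision scores serve as a rough proxy for distance from the decision boundaries, the labels of the farthest 15% of training samples are flipped, and a victim KNN is retrained on the poisoned labels. This is a generic reconstruction of the idea, not the authors' exact OOP procedure; the surrogate choice and the 15% rate are assumptions.

```python
# Sketch of an outlier-oriented label-flipping poisoning experiment.
# Generic reconstruction, not the authors' exact OOP procedure.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Surrogate model used only to score distance from the decision boundaries;
# the max per-class decision value is a rough proxy for that distance.
surrogate = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
margins = surrogate.decision_function(X_tr)   # shape: (n_samples, n_classes)
distance = np.max(margins, axis=1)            # larger = farther from boundary

# Poison the 15% of training samples farthest from the boundary by flipping
# each of their labels to a different (randomly chosen) class.
n_poison = int(0.15 * len(y_tr))
idx = np.argsort(distance)[-n_poison:]
y_poisoned = y_tr.copy()
rng = np.random.default_rng(0)
for i in idx:
    choices = [c for c in np.unique(y_tr) if c != y_tr[i]]
    y_poisoned[i] = rng.choice(choices)

# Compare a victim model trained on clean vs. poisoned labels.
clean = KNeighborsClassifier().fit(X_tr, y_tr)
poisoned = KNeighborsClassifier().fit(X_tr, y_poisoned)
print("clean accuracy:   ", accuracy_score(y_te, clean.predict(X_te)))
print("poisoned accuracy:", accuracy_score(y_te, poisoned.predict(X_te)))
```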