Ryan Sheatsley's research while affiliated with Pennsylvania State University and other places

Publications (26)

Preprint
Full-text available
Public clouds provide impressive capability through resource sharing. However, recent works have shown that the reuse of IP addresses can allow adversaries to exploit the latent configurations left by previous tenants. In this work, we perform a comprehensive analysis of the effect of cloud IP address allocation on exploitation of latent configurat...
Preprint
Adversarial examples, inputs designed to induce worst-case behavior in machine learning models, have been extensively studied over the past decade. Yet, our understanding of this phenomenon stems from a rather fragmented pool of knowledge; at present, there are a handful of attacks, each with disparate assumptions in threat models and incomparable...
Preprint
Full-text available
Millions of consumers depend on smart camera systems to remotely monitor their homes and businesses. However, the architecture and design of popular commercial systems require users to relinquish control of their data to untrusted third parties, such as service providers (e.g., the cloud). Third parties therefore can (and in some instances have) ac...
Article
Gamma-ray Localization Aided with Machine-learning (GLAM) utilizes an array of four rectangular NaI(Tl) detectors, and data processed with a k-nearest neighbor machine learning model, to predict the location of gamma-ray sources in stationary scenarios. This work demonstrates GLAM capabilities to predict real source locations when trained purely on...
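A minimal sketch of the k-nearest-neighbor step described above, assuming the four detector count rates serve as features and a 2-D source position is the regression target (the data layout, units, and values are illustrative, not GLAM's):

```python
# Minimal sketch (assumed data layout): predict a 2-D gamma-ray source
# position from the count rates of four NaI(Tl) detectors with k-NN
# regression, loosely following the GLAM description above.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)

# Hypothetical training data: rows are measurements, columns are the
# four detector count rates; targets are (x, y) source positions.
X_train = rng.uniform(0, 1000, size=(500, 4))   # counts per detector
y_train = rng.uniform(-5, 5, size=(500, 2))     # source position [m]

model = KNeighborsRegressor(n_neighbors=5, weights="distance")
model.fit(X_train, y_train)

# Predict the source location for a new set of detector readings.
X_new = rng.uniform(0, 1000, size=(1, 4))
print(model.predict(X_new))  # e.g. [[x_hat, y_hat]]
```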
Preprint
Planning algorithms are used in computational systems to direct autonomous behavior. In a canonical application, for example, planning for autonomous vehicles is used to automate the static or continuous planning towards performance, resource management, or functional goals (e.g., arriving at the destination, managing fuel consumption). Existi...
Preprint
Full-text available
Public clouds provide scalable and cost-efficient computing through resource sharing. However, moving from traditional on-premises service management to clouds introduces new challenges; failure to correctly provision, maintain, or decommission elastic services can lead to functional failure and vulnerability to attack. In this paper, we explore a...
Preprint
Geomagnetic storms, disturbances of Earth's magnetosphere caused by masses of charged particles being emitted from the Sun, are an uncontrollable threat to modern technology. Notably, they have the potential to damage satellites and cause instability in power grids on Earth, among other disasters. They result from high solar activity, which is induc...
Article
Full-text available
Millions of consumers depend on smart camera systems to remotely monitor their homes and businesses. However, the architecture and design of popular commercial systems require users to relinquish control of their data to untrusted third parties, such as service providers (e.g., the cloud). Third parties therefore can (and in some instances have) ac...
Preprint
Network intrusion detection systems (NIDS) are an essential defense for computer networks and the hosts within them. Machine learning (ML) nowadays predominantly serves as the basis for NIDS decision making, where models are tuned to reduce false alarms, increase detection rates, and detect known and unknown attacks. At the same time, ML models hav...
Preprint
Full-text available
Machine Learning is becoming a pivotal aspect of many systems today, offering newfound performance on classification and prediction tasks, but this rapid integration also comes with new unforeseen vulnerabilities. To harden these systems, the ever-growing field of Adversarial Machine Learning has proposed new attack and defense mechanisms. However,...
Preprint
One of the principal uses of physical-space sensors in public safety applications is the detection of unsafe conditions (e.g., release of poisonous gases, weapons in airports, tainted food). However, current detection methods in these applications are often costly, slow to use, and can be inaccurate in complex, changing, or new environments. In thi...
Article
Machine learning-based network intrusion detection systems have demonstrated state-of-the-art accuracy in flagging malicious traffic. However, machine learning has been shown to be vulnerable to adversarial examples, particularly in domains such as image recognition. In many threat models, the adversary exploits the unconstrained nature of images–t...
Preprint
Full-text available
Millions of consumers depend on smart camera systems to remotely monitor their homes and businesses. However, the architecture and design of popular commercial systems require users to relinquish control of their data to untrusted third parties, such as service providers (e.g., the cloud). Third parties therefore can (and in some instances have) ac...
Chapter
Machine learning (ML) is fundamentally changing our way of life with the recent availability of high computational power and big data. Emerging ML‐based techniques of network intrusion detection systems (NIDS) can detect complex cyberattacks, undetectable by conventional techniques. In this chapter, we evaluate the threat of a generative adversaria...
Preprint
Machine learning is vulnerable to adversarial examples: inputs designed to cause models to perform poorly. However, it is unclear if adversarial examples represent realistic inputs in the modeled domains. Diverse domains such as networks and phishing have domain constraints: complex relationships between features that an adversary must satisfy for an...
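A tiny sketch of what a domain constraint between features might look like for network flows; the feature names and rules below are hypothetical illustrations, not constraints from the paper:

```python
# Illustrative domain constraints between flow features: a flow that is
# not TCP cannot carry TCP flags, and bytes sent cannot be smaller than
# the number of packets. Feature names and rules are hypothetical.
def satisfies_constraints(flow: dict) -> bool:
    if flow["protocol"] != "tcp" and flow["tcp_flags"] != 0:
        return False
    if flow["bytes_sent"] < flow["num_packets"]:
        return False
    return True

# An adversarial flow that violates the first rule is not realizable.
print(satisfies_constraints({"protocol": "udp", "tcp_flags": 2,
                             "bytes_sent": 900, "num_packets": 6}))  # False
```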
Article
One of the principal uses of physical-space sensors in public safety applications is the detection of unsafe conditions (e.g., release of poisonous gases, weapons in airports, tainted food). However, current detection methods in these applications are often costly, slow to use, and can be inaccurate in complex, changing, or new environments. In thi...
Preprint
Full-text available
Machine learning algorithms have been shown to be vulnerable to adversarial manipulation through systematic modification of inputs (e.g., adversarial examples) in domains such as image recognition. Under the default threat model, the adversary exploits the unconstrained nature of images; each feature (pixel) is fully under control of the adversary....
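Under the unconstrained image threat model described above, every pixel can be perturbed independently; a minimal FGSM-style sketch, with the input gradient supplied as a placeholder (the image, gradient, and epsilon are illustrative):

```python
# Minimal sketch of an FGSM-style perturbation under the unconstrained
# image threat model: every pixel can be modified independently, subject
# only to staying in the valid [0, 1] range. The gradient of the loss
# w.r.t. the input would normally come from the attacked model; here it
# is a placeholder array.
import numpy as np

def fgsm_perturb(x, grad, eps=0.03):
    """Return an adversarial example x' = clip(x + eps * sign(grad))."""
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, size=(28, 28))        # hypothetical grayscale image
grad = rng.standard_normal(size=(28, 28))   # stand-in for dLoss/dx
x_adv = fgsm_perturb(x, grad)
print(np.abs(x_adv - x).max())              # bounded by eps
```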
Chapter
Smartphone users are offered a plethora of applications providing services, such as games and entertainment. In 2018, 94% of applications on Google Play were advertised as “free”. However, many of these applications obtain undefined amounts of personal information from unaware users. In this paper, we introduce transiency: a privacy-enhancing featu...
Conference Paper
Full-text available
Data sharing among partners---users, companies, organizations---is crucial for the advancement of collaborative machine learning in many domains such as healthcare, finance, and security. Sharing through secure computation and other means allows these partners to perform privacy-preserving computations on their private data in controlled ways. Howev...
Conference Paper
Full-text available
For well over a quarter century, detection systems have been driven by models learned from input features collected from real or simulated environments. An artifact (e.g., network event, potential malware sample, suspicious email) is deemed malicious or non-malicious based on its similarity to the learned model at runtime. However, the training of...
Article
Full-text available
Data sharing among partners---users, organizations, companies---is crucial for the advancement of data analytics in many domains. Sharing through secure computation and differential privacy allows these partners to perform private computations on their sensitive data in controlled ways. However, in reality, there exist complex relationships among m...
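One of the building blocks mentioned above, differential privacy, can be illustrated with the Laplace mechanism on a simple counting query; a minimal sketch (epsilon, sensitivity, and the records are illustrative, not from the paper):

```python
# Minimal sketch of the Laplace mechanism: release a count over private
# records with epsilon-differential privacy. The sensitivity of a
# counting query is 1; epsilon and the data below are illustrative.
import numpy as np

def private_count(values, predicate, epsilon=0.5, sensitivity=1.0):
    true_count = sum(1 for v in values if predicate(v))
    noise = np.random.default_rng().laplace(0.0, sensitivity / epsilon)
    return true_count + noise

records = [42, 17, 63, 55, 29, 71]               # hypothetical sensitive data
print(private_count(records, lambda v: v > 40))  # noisy count of records > 40
```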

Citations

... One component that enables this architecture is the reuse of scarce IPv4 addresses across tenants as services scale. Recent works [26], [10], [29], however, have shown that this practice exposes new security risks as malicious tenants exploit latent configuration created by prior users of an address. Thus, cloud providers are motivated to manage their IP space such that adversaries cannot easily discover a large number of IP addresses and exploit prior tenants. ...
... In future work, we intend to generate more realistic adversarial attacks that project more easily into the problem space. To do so, we will follow recommendations found in the literature [85]-[87], namely: (i) restrict the space of features to be perturbed, i.e., avoid perturbing non-differentiable features (so that the transformation is reversible) and features directly tied to the functionality of the flow (so as not to impact it); (ii) perform small-amplitude perturbations and check that the values of the modified features remain valid (domain constraints); and (iii) analyze the consistency of the values taken by correlated features. ...
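A minimal sketch of recommendations (i) and (ii), assuming flow-level features: a mask limits which features may change, the step is small relative to each feature's range, and the result is clipped back into valid bounds (feature names, bounds, and the gradient are illustrative):

```python
# Minimal sketch of a constrained perturbation following (i) and (ii)
# above: only selected features may change, the step is small, and the
# result is clipped to per-feature valid ranges. Feature names, bounds,
# and the gradient below are illustrative.
import numpy as np

features = ["duration", "bytes_sent", "pkt_rate", "protocol_id"]
x = np.array([1.2, 350.0, 14.0, 6.0])             # original flow features
grad = np.array([0.4, -1.0, 0.7, 2.0])            # stand-in for dLoss/dx

mask = np.array([1.0, 1.0, 1.0, 0.0])             # (i) never touch protocol_id
lower = np.array([0.0, 0.0, 0.0, 0.0])
upper = np.array([60.0, 1e6, 1e4, 255.0])         # (ii) per-feature valid ranges

eps = 0.05 * (upper - lower)                      # small step relative to range
x_adv = np.clip(x + eps * np.sign(grad) * mask, lower, upper)
print(dict(zip(features, x_adv)))
```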
... Erdemir et al. [44] fooled DNN-based detection models (e.g., credit risk detection) using non-uniform perturbations generated with PGD [37]. Sheatsley et al. [45] presented a formal logic framework to learn domain constraints from data used in Network Intrusion Detection Systems (NIDSs) and phishing detection. Teuffenbach et al. [46] employed domain knowledge to group flow-based features in NIDSs. ...
... Essentially, the K-Nearest Neighbor (KNN) technique employs a distance function (such as Euclidean, Manhattan, or Minkowski) to determine the difference and similarity between two classes within a dataset [220]. In recent times, data have evolved in ways that may not be feasible for other ML algorithms but are for KNNs, because KNNs make no prior assumptions about the data [221]. ...
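A short sketch relating the distance functions named above: Euclidean and Manhattan are the p = 2 and p = 1 cases of the Minkowski distance (the vectors are illustrative):

```python
# Minkowski distance: p = 1 is Manhattan, p = 2 is Euclidean. k-NN then
# labels a query point by the majority class of its k closest neighbors.
import numpy as np

def minkowski(a, b, p):
    return np.sum(np.abs(a - b) ** p) ** (1.0 / p)

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 0.0, 3.0])
print(minkowski(a, b, 1))  # Manhattan: 5.0
print(minkowski(a, b, 2))  # Euclidean: ~3.606
```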
... Several studies have revealed the gap between experience and expectation. Many users may not be aware of the extent of information collected in the background (Mehrnezhad et al. 2017) that could potentially be traded for developers' profit in exchange for "free" apps (Alvarez et al. 2019; Isaac 2017). Users might be surprised and feel uneasy when confronted with such possibilities (Jung, Han & Wetherall 2012; Shih, Liccardi & Weitzner 2015; Thompson et al. 2013). ...
... These attacks specifically aim to obtain Bitcoin and blockchain users' private keys through social engineering methods [169]- [171], fake wallets [172], [173], and key-stealing trojan malware [174]- [176]. Although these attacks and their countermeasures [177]- [179] have been studied extensively in the literature [180], their impact in the Bitcoin and blockchain domain has not been investigated yet and can lead to new research directions. ...
... By training classification models on this network traffic, attackers can infer application types and thus conduct classification. Following [27, 28], we build an attack model, described as follows: the attacker attempts to classify the observed network traffic T_F into an application type i from the set C = {c_1, c_2, c_3, ..., c_i, ..., c_n}. The feature set of the network traffic T_F is X = {x_1, x_2, x_3, ..., x_m}. ...
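A minimal sketch of this attack model, assuming labeled training traffic is available to the attacker: a classifier fit on feature vectors from X predicts which application type c_i in C an observed flow T_F belongs to (the data, class names, and model choice are illustrative):

```python
# Minimal sketch of the traffic-analysis attack model above: train a
# classifier on labeled flow features X and predict which application
# type c_i in C an observed flow T_F belongs to. Data and class names
# are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
C = ["video", "voip", "web", "file_transfer"]      # application set C
X_train = rng.uniform(size=(400, 6))               # m = 6 flow features
y_train = rng.integers(0, len(C), size=400)        # labels into C

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X_train, y_train)

T_F = rng.uniform(size=(1, 6))                     # observed flow features
print(C[clf.predict(T_F)[0]])                      # inferred application type
```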
... The application of LUPI in the cybersecurity domain has also been investigated in the context of detecting malicious botnet activities in IT networks [22]. LUPI was also investigated in [23] to train anomaly detection systems using forensic data as privileged information in a variety of security applications, such as face authentication, fast-flux bot detection, and malware traffic detection. However, the application of LUPI to NIDS for ICS has not been investigated before. ...
... Secure and private computation of statistical models is increasingly used in different operational settings from healthcare [1]- [3] to finance [4] and security sensitive applications [5], [6]. Given the distributed nature of these applications, security and privacy are mostly achieved by utilizing Secure Multiparty Computation (SMC). ...