Giovanni Apruzzese
University of Liechtenstein · Institute of Information Systems

Ph.D. in Information and Communication Technologies

About

32 Publications
13,416 Reads
602 Citations (since 2017)
32 Research Items
[Citations per year chart, 2017–2023]
Introduction
My research combines Cybersecurity and Big Data Analytics with the goal of detecting traces of illicit or anomalous activities. In my research efforts, I strive to find the most appropriate answers by leveraging and combining state-of-the-art techniques and algorithms, especially those within the fields of Time Series analysis and Machine Learning. My expertise lies in analyzing network-related data (such as Network Intrusion Detection System logs, network Flows, and Packet Captures), and I have recently started focusing on Phishing webpage identification. During the second half of my PhD, I explored the topic of Adversarial Attacks against machine learning cyber-detectors.
Additional affiliations
September 2017 - December 2018
Università degli Studi di Modena e Reggio Emilia
Position
  • Research Assistant
Description
  • Teaching Assistant for the course "Information Security" (Master's Degree in Computer Engineering)
November 2016 - October 2019
Università degli Studi di Modena e Reggio Emilia
Position
  • PhD Student
Education
January 2019 - August 2019
Dartmouth College
Field of study
  • Computer Science
November 2016 - February 2020
Università degli Studi di Modena e Reggio Emilia
Field of study
  • Computer Engineering
December 2013 - July 2016
Università degli Studi di Modena e Reggio Emilia
Field of study
  • Computer Engineering

Publications (32)
Article
Full-text available
Several advanced cyber attacks adopt the technique of "pivoting", through which attackers create a command propagation tunnel through two or more hosts in order to reach their final target. Identifying such malicious activities is one of the toughest research problems because of several challenges: command propagation is a rare event that cannot b...
Conference Paper
Full-text available
Machine learning is adopted in a wide range of domains where it shows its superiority over traditional rule-based algorithms. These methods are being integrated in cyber detection systems with the goal of supporting or even replacing the first level of security analysts. Although the complete automation of detection and analysis is an enticing goal...
Article
Full-text available
As cybersecurity detectors increasingly rely on machine learning mechanisms, attacks to these defenses escalate as well. Supervised classifiers are prone to adversarial evasion, and existing countermeasures suffer from many limitations. Most solutions degrade performance in the absence of adversarial perturbations; they are unable to face novel at...
Article
Machine learning algorithms are effective in several applications, but they are not as successful when applied to intrusion detection in cyber security. Due to their high sensitivity to training data, cyber detectors based on machine learning are vulnerable to targeted adversarial attacks that involve the perturbation of initial samples. E...
Preprint
Full-text available
Recent years have seen a proliferation of research on adversarial machine learning. Numerous papers demonstrate powerful algorithmic attacks against a wide variety of machine learning (ML) models, and numerous other papers propose defenses that can withstand most attacks. However, abundant real-world evidence suggests that actual attackers use simp...
Preprint
Full-text available
Although machine learning based algorithms have been extensively used for detecting phishing websites, there has been relatively little work on how adversaries may attack such "phishing detectors" (PDs for short). In this paper, we propose a set of Gray-Box attacks on PDs that an adversary may use which vary depending on the knowledge that he has a...
Preprint
Full-text available
The Smart Grid (SG) is a cornerstone of modern society, providing the energy required to sustain billions of lives and thousands of industries. Unfortunately, as one of the most critical infrastructures of our World, the SG is an attractive target for attackers. The problem is aggravated by the increasing adoption of digitalisation, which further i...
Preprint
Full-text available
Existing literature on adversarial Machine Learning (ML) focuses either on showing attacks that break every ML model, or defenses that withstand most attacks. Unfortunately, little consideration is given to the actual cost of the attack or the defense. Moreover, adversarial samples are often crafted in the "feature-space", making the corre...
Preprint
Full-text available
Did you know that over 70 million Dota2 players have their in-game data freely accessible? What if such data were used in malicious ways? This paper is the first to investigate such a problem. Motivated by the widespread popularity of video games, we propose the first threat model for Attribute Inference Attacks (AIA) in the Dota2 context. We expl...
Article
Full-text available
Machine Learning (ML) represents a pivotal technology for current and future information systems, and many domains already leverage the capabilities of ML. However, deployment of ML in cybersecurity is still at an early stage, revealing a significant discrepancy between research and practice. Such discrepancy has its root cause in the current state...
Preprint
Full-text available
Fifth Generation (5G) networks must support billions of heterogeneous devices while guaranteeing optimal Quality of Service (QoS). Such requirements are impossible to meet with human effort alone, and Machine Learning (ML) represents a core asset in 5G. ML, however, is known to be vulnerable to adversarial examples; moreover, as our paper will show...
Preprint
Full-text available
Machine Learning (ML) represents a pivotal technology for current and future information systems, and many domains already leverage the capabilities of ML. However, deployment of ML in cybersecurity is still at an early stage, revealing a significant discrepancy between research and practice. Such discrepancy has its root cause in the current state...
Preprint
Full-text available
Machine learning (ML) has become an important paradigm for cyberthreat detection (CTD) in recent years. A substantial research effort has been invested in the development of specialized algorithms for CTD tasks. From the operational perspective, however, the progress of ML-based CTD is hindered by the difficulty in obtaining the large sets of l...
Preprint
Full-text available
We propose to generate adversarial samples by modifying activations of upper layers encoding semantically meaningful concepts. The original sample is shifted towards a target sample, yielding an adversarial sample, by using the modified activations to reconstruct the original sample. A human might (and possibly should) notice differences between th...
Preprint
Full-text available
Enhancing Network Intrusion Detection Systems (NIDS) with supervised Machine Learning (ML) is tough. ML-NIDS must be trained and evaluated, operations requiring data where benign and malicious samples are clearly labelled. Such labels demand costly expert knowledge, resulting in a lack of real deployments, as well as on papers always relying on the...
Article
Although machine learning based algorithms have been extensively used for detecting phishing websites, there has been relatively little work on how adversaries may attack such “phishing detectors” (PDs for short). In this paper, we propose a set of Gray-Box attacks on PDs that an adversary may use which vary depending on the knowledge that he has a...
Article
Full-text available
The incremental diffusion of machine learning algorithms in supporting cybersecurity is creating novel defensive opportunities but also new types of risks. Multiple studies have shown that machine learning methods are vulnerable to adversarial attacks that create tiny perturbations aimed at decreasing the effectiveness of detecting threats. We o...
Preprint
Full-text available
The incremental diffusion of machine learning algorithms in supporting cybersecurity is creating novel defensive opportunities but also new types of risks. Multiple studies have shown that machine learning methods are vulnerable to adversarial attacks that create tiny perturbations aimed at decreasing the effectiveness of detecting threats. We o...
Preprint
Full-text available
Recent advances in deep learning renewed the research interests in machine learning for Network Intrusion Detection Systems (NIDS). Specifically, attention has been given to sequential learning models, due to their ability to extract the temporal characteristics of Network traffic Flows (NetFlows), and use them for NIDS tasks. However, the applicat...
Conference Paper
Full-text available
Pivoting is a technique used by cyber attackers to exploit the privileges of compromised hosts in order to reach their final target. Existing research on countering this menace is only effective for pivoting activities spanning within the internal network perimeter. When applying existing methods to include external traffic, the detection algorithm...
Article
Full-text available
We present the first dataset that aims to serve as a benchmark to validate the resilience of botnet detectors against adversarial attacks. This dataset includes realistic adversarial samples that are generated by leveraging two widely used Deep Reinforcement Learning (DRL) techniques. These adversarial samples are proved to evade state of the art d...
Article
Full-text available
Adversarial attacks represent a critical issue that prevents the reliable integration of machine learning methods into cyber defense systems. Past work has shown that even proficient detectors are highly affected just by small perturbations to malicious samples, and that existing countermeasures are immature. We address this problem by presenting A...
Preprint
Full-text available
Machine learning algorithms are effective in several applications, but they are not as successful when applied to intrusion detection in cyber security. Due to their high sensitivity to training data, cyber detectors based on machine learning are vulnerable to targeted adversarial attacks that involve the perturbation of initial samples. E...
Conference Paper
Full-text available
Classifiers based on Machine Learning are vulnerable to adversarial attacks, which involve the creation of malicious samples that are not classified correctly. While this phenomenon has been extensively studied within the image processing domain, comprehensive analyses are scarce in the cybersecurity field. This is a critical problem because cyber-...
