Article

A Multiple Instance Learning Strategy for Combating Good Word Attacks on Spam Filters.

Journal of Machine Learning Research 01/2008; 9:1115-1146. DOI:10.1145/1390681.1390719
Source: DBLP

ABSTRACT Statistical spam filters are known to be vulnerable to adversarial attacks. One of the more common adversarial attacks, known as the good word attack, thwarts spam filters by appending to spam messages sets of "good" words, which are words that are common in legitimate email but rare in spam. We present a counterattack strategy that attempts to differentiate spam from legitimate email in the input space by transforming each email into a bag of multiple segments, and subsequently applying multiple instance logistic regression on the bags. We treat each segment in the bag as an instance. An email is classified as spam if at least one instance in the corresponding bag is spam, and as legitimate if all the instances in it are legitimate. We show that a classifier using our multiple instance counterattack strategy is more robust to good word attacks than its single instance counterpart and other single instance learners commonly used in the spam filtering domain.
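The bag-of-segments strategy described in the abstract can be illustrated with a short sketch. The code below is a minimal, hedged illustration, not the authors' exact formulation: it assumes a scikit-learn bag-of-words representation, splits each email into three equal segments, and combines segment-level logistic scores with a noisy-OR rule, P(spam | bag) = 1 - prod_i (1 - P(spam | segment_i)), which encodes the "spam if at least one instance is spam" assumption. The function names (make_bags, train_mi_logistic_regression, classify) and the plain gradient-descent training loop are illustrative choices.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer

def make_bags(emails, vectorizer, n_segments=3):
    """Turn each email into a bag of segment feature vectors (one instance per segment)."""
    bags = []
    for text in emails:
        words = text.split()
        segments = [" ".join(seg) for seg in np.array_split(words, n_segments) if len(seg)]
        bags.append(vectorizer.transform(segments).toarray())
    return bags

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_mi_logistic_regression(bags, labels, lr=0.05, epochs=200):
    """Fit instance-level weights so that the noisy-OR bag probability
    P(spam | bag) = 1 - prod_i (1 - sigma(w . x_i + b)) matches the bag labels."""
    w = np.zeros(bags[0].shape[1])
    b = 0.0
    eps = 1e-12
    for _ in range(epochs):
        for X, y in zip(bags, labels):
            p_inst = sigmoid(X @ w + b)            # per-segment spam probabilities
            p_bag = 1.0 - np.prod(1.0 - p_inst)    # bag is spam if any segment is
            # gradient of the bag-level log loss, pushed back to the instance scores
            d_bag = (p_bag - y) / (p_bag * (1.0 - p_bag) + eps)
            d_inst = d_bag * (1.0 - p_bag) / (1.0 - p_inst + eps) * p_inst * (1.0 - p_inst)
            w -= lr * (X.T @ d_inst)
            b -= lr * d_inst.sum()
    return w, b

def classify(bags, w, b, threshold=0.5):
    """Label a bag (email) as spam if its noisy-OR probability exceeds the threshold."""
    return [int(1.0 - np.prod(1.0 - sigmoid(X @ w + b)) > threshold) for X in bags]

# Tiny usage example with made-up emails (1 = spam, 0 = legitimate)
train_emails = ["cheap meds buy now limited offer act fast",
                "meeting notes attached please review before friday",
                "win a free prize click here now cheap offer",
                "lunch tomorrow with the project team sounds good"]
train_labels = [1, 0, 1, 0]
vec = CountVectorizer().fit(train_emails)
w, b = train_mi_logistic_regression(make_bags(train_emails, vec), train_labels)
print(classify(make_bags(["buy cheap meds now", "see you at the meeting"], vec), w, b))
```

The noisy-OR combination is only one way to realize multiple instance logistic regression; the published method may aggregate instances differently, but the bag construction and the at-least-one-spam decision rule follow the description above.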

  • ABSTRACT: Machine learning has become a prominent tool in various domains owing to its adaptability. However, this adaptability can be exploited by an adversary to make machine learning malfunction, a process known as adversarial learning. This paper investigates adversarial learning in the context of artificial neural networks. The aim is to test the hypothesis that an ensemble of neural networks trained on the same data manipulated by an adversary is more robust than a single network. We investigate two attack types: targeted and random. We use Mahalanobis distance and covariance matrices to select targeted attacks. The experiments use both artificial and UCI datasets. The results demonstrate that an ensemble of neural networks trained on attacked data is more robust against the attack than a single network. While many papers have demonstrated that an ensemble of neural networks is more robust against noise than a single network, the significance of the current work lies in the fact that targeted attacks are not white noise.
    (A rough sketch of this ensemble-versus-single-network comparison appears after this list.)
    2010 IEEE International Conference on Fuzzy Systems (FUZZ); 08/2010
  • ABSTRACT: We investigate the susceptibility of compression-based learning algorithms to adversarial attacks. We demonstrate that compression-based algorithms are surprisingly resilient to carefully plotted attacks that can easily devastate standard learning algorithms. In the worst case, where we assume the adversary has full knowledge of the training data, compression-based algorithms failed as expected. We tackle this worst case by proposing a new technique that analyzes subsequences strategically extracted from the given data, and achieve near-zero performance loss in the worst case in the domain of spam filtering.
    Advances in Knowledge Discovery and Data Mining - 15th Pacific-Asia Conference, PAKDD 2011, Shenzhen, China, May 24-27, 2011, Proceedings, Part II; 01/2011
  • ABSTRACT: Experimental and theoretical evidence has shown that multiple classifier systems (MCSs) can outperform single classifiers in terms of classification accuracy. MCSs are currently used in several kinds of applications, among them security applications such as biometric identity recognition, intrusion detection in computer networks, and spam filtering. However, security systems operate in adversarial environments against intelligent adversaries who try to evade them, and must therefore be highly robust to evasion as well as highly accurate. The effectiveness of MCSs in improving the hardness of evasion has not yet been investigated, and their use in security systems is mainly based on intuitive and qualitative motivations, besides some experimental evidence. In this chapter we address the issue of why and how MCSs can improve the hardness of evasion of security systems in adversarial environments. To this aim we develop analytical models of adversarial classification problems (also exploiting a theoretical framework recently proposed by other authors), and apply them to analyse two strategies currently used to implement MCSs in several applications. We then give an experimental investigation of the considered strategies on a case study in spam filtering, using a large corpus of publicly available spam and legitimate e-mails and SpamAssassin, a widely used open-source spam filter.
    10/2009: pages 15-38;
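The first item in the list above reports that an ensemble of neural networks trained on adversarially manipulated data withstands targeted attacks better than a single network. The sketch below gives a rough, self-contained version of that comparison under stated assumptions: scikit-learn's MLPClassifier on synthetic data, a simple label-flip poisoning of points chosen by Mahalanobis distance from the target class mean, and a majority vote over seven networks. The attack construction, dataset, and ensemble size are illustrative and not the paper's exact experimental setup.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

def mahalanobis_targets(X, y, target_class=1, n_targets=20):
    """Pick the points of the target class closest (by Mahalanobis distance)
    to that class's mean; flipping their labels is one simple 'targeted' attack."""
    Xc = X[y == target_class]
    mu = Xc.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(Xc, rowvar=False))
    d = np.einsum("ij,jk,ik->i", Xc - mu, cov_inv, Xc - mu)
    closest = np.argsort(d)[:n_targets]
    return np.flatnonzero(y == target_class)[closest]

def attack_labels(y, target_idx):
    """Flip the labels of the targeted training points."""
    y_adv = y.copy()
    y_adv[target_idx] = 1 - y_adv[target_idx]
    return y_adv

def ensemble_predict(models, X):
    """Majority vote over the ensemble's 0/1 predictions."""
    votes = np.mean([m.predict(X) for m in models], axis=0)
    return (votes >= 0.5).astype(int)

X, y = make_classification(n_samples=600, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Poison the training labels of the targeted points
y_tr_adv = attack_labels(y_tr, mahalanobis_targets(X_tr, y_tr, n_targets=40))

single = MLPClassifier(hidden_layer_sizes=(20,), max_iter=500, random_state=0).fit(X_tr, y_tr_adv)
ensemble = [MLPClassifier(hidden_layer_sizes=(20,), max_iter=500, random_state=s).fit(X_tr, y_tr_adv)
            for s in range(7)]

print("single net accuracy :", (single.predict(X_te) == y_te).mean())
print("ensemble accuracy   :", (ensemble_predict(ensemble, X_te) == y_te).mean())
```

Diversity here comes only from different random initializations; the intuition is that a targeted perturbation that fools one network is less likely to sway the majority vote.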
