
Abstract

One of the most critical data analysis tasks is streaming data classification, where we may also observe the concept drift phenomenon, i.e., changes in the probabilistic characteristics of the decision model. From a practical point of view, we may face this type of task in banking, medicine, or cybersecurity, to name only a few domains. A vital characteristic of these problems is that the classes we are interested in (e.g., fraudulent transactions, threats, or serious diseases) are usually infrequent, which hinders the design of the classification system. The paper presents a novel algorithm, DSCB (Deterministic Sampling Classifier with weighted Bagging), which employs data preprocessing methods and a weighted bagging technique to classify non-stationary imbalanced data streams. It builds models based on an incoming data chunk, but it also takes previously arrived instances into account. The proposed approach has been evaluated in a wide range of computer experiments carried out on real and artificially generated data streams with various imbalance ratios, label noise levels, and concept drift types. The results confirmed that the weighted bagging ensemble coupled with data preprocessing can outperform state-of-the-art methods.
FREE ACCESS by June 12, 2022: https://authors.elsevier.com/a/1eyma5aecSniqS
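To make the processing scheme sketched in the abstract more concrete, the following is a minimal, hypothetical illustration of a chunk-based ensemble with data preprocessing and performance-weighted bagging, written with scikit-learn. It is not the authors' DSCB implementation: the class name, the oversampling routine, the weighting rule and the member limit are all illustrative assumptions, and, unlike DSCB, this sketch ignores previously arrived instances.

import numpy as np
from sklearn.base import clone
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import balanced_accuracy_score


def random_oversample(X, y, rng):
    """Naively oversample every class up to the majority-class count."""
    classes, counts = np.unique(y, return_counts=True)
    n_max = counts.max()
    idx = np.concatenate([
        rng.choice(np.flatnonzero(y == c), size=n_max, replace=True) for c in classes
    ])
    return X[idx], y[idx]


class ChunkWeightedBagging:
    """Hypothetical chunk-based, weighted bagging ensemble (illustrative sketch only)."""

    def __init__(self, base_estimator=GaussianNB(), max_members=10, seed=0):
        self.base_estimator = base_estimator
        self.max_members = max_members
        self.rng = np.random.default_rng(seed)
        self.members, self.weights = [], []

    def partial_fit(self, X_chunk, y_chunk):
        # Preprocess the chunk, train a new member, and re-weight all members
        # by their balanced accuracy on the current (unsampled) chunk.
        X_res, y_res = random_oversample(X_chunk, y_chunk, self.rng)
        self.members.append(clone(self.base_estimator).fit(X_res, y_res))
        self.weights = [
            balanced_accuracy_score(y_chunk, m.predict(X_chunk)) for m in self.members
        ]
        if len(self.members) > self.max_members:        # drop the weakest member
            worst = int(np.argmin(self.weights))
            del self.members[worst], self.weights[worst]
        return self

    def predict(self, X):
        # Weighted majority vote (assumes non-negative integer class labels).
        votes = np.array([m.predict(X) for m in self.members]).astype(int)
        w = np.array(self.weights)
        return np.array([np.bincount(col, weights=w).argmax() for col in votes.T])

Each call to partial_fit consumes one labelled chunk; predict performs a weighted majority vote over the retained members.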
References

Conference Paper
Full-text available
Continual learning from data streams is among the most important topics in contemporary machine learning. One of the biggest challenges in this domain lies in creating algorithms that can continuously adapt to arriving data. However, previously learned knowledge may become outdated, as streams evolve over time. This phenomenon is known as concept drift and must be detected to facilitate efficient adaptation of the learning model. While there exists a plethora of drift detectors, all of them assume that we are dealing with roughly balanced classes. In the case of imbalanced data streams, those detectors will be biased towards the majority classes, ignoring changes happening in the minority ones. Furthermore, class imbalance may evolve over time and classes may change their roles (majority becoming minority and vice versa). This is especially challenging in the multi-class setting, where relationships among classes become complex. In this paper, we propose a detailed taxonomy of challenges posed by concept drift in multi-class imbalanced data streams, as well as a novel trainable concept drift detector based on a Restricted Boltzmann Machine. It is capable of monitoring multiple classes at once and using reconstruction error to detect changes in each of them independently. Our detector utilizes a skew-insensitive loss function that allows it to handle multiple imbalanced distributions. Due to its trainable nature, it is capable of following changes in a stream and evolving class roles, and it can deal with local concept drift occurring in minority classes. An extensive experimental study on multi-class drifting data streams, enriched with a detailed analysis of the impact of local drifts and changing imbalance ratios, confirms the high efficacy of our approach.
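The detector described above is built on a Restricted Boltzmann Machine; the sketch below only illustrates the general idea of monitoring a separate reconstruction error per class and flagging drift when it departs from its baseline. A PCA reconstruction stands in for the RBM, and the thresholding rule and constants are assumptions made purely for illustration.

import numpy as np
from sklearn.decomposition import PCA


class PerClassReconstructionDriftMonitor:
    # Illustrative per-class drift monitor (not the paper's RBM detector):
    # one reconstruction model per class; drift in class c is flagged when the
    # mean reconstruction error of its instances in a new chunk exceeds the
    # baseline mean by `k` baseline standard deviations (a hypothetical rule).

    def __init__(self, n_components=2, k=3.0):
        self.n_components, self.k = n_components, k
        self.models, self.baselines = {}, {}      # class -> PCA, class -> (mean, std)

    def _errors(self, model, X):
        X_hat = model.inverse_transform(model.transform(X))
        return np.mean((X - X_hat) ** 2, axis=1)

    def _refit(self, c, Xc):
        self.models[c] = PCA(n_components=self.n_components).fit(Xc)
        errs = self._errors(self.models[c], Xc)
        self.baselines[c] = (errs.mean(), errs.std() + 1e-12)

    def update(self, X_chunk, y_chunk):
        """Process one labelled chunk; return the classes flagged as drifting."""
        drifted = []
        for c in np.unique(y_chunk):
            Xc = X_chunk[y_chunk == c]
            if c not in self.models:              # first time this class is seen
                self._refit(c, Xc)
                continue
            mean, std = self.baselines[c]
            if self._errors(self.models[c], Xc).mean() > mean + self.k * std:
                drifted.append(c)                 # local drift detected in class c
                self._refit(c, Xc)                # adapt the model to the new concept
        return drifted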
Article
Full-text available
Class-imbalanced datasets are common across several domains such as health, banking, security, and others. The dominance of majority class instances (negative class) often results in biased learning models; therefore, classifying such datasets requires employing methods that tackle the problem. In this paper, we propose a new hybrid approach aiming at reducing the dominance of the majority class instances using class decomposition and increasing the minority class instances using an oversampling method. Unlike other undersampling methods, which suffer from data loss, our method preserves the majority class instances, yet significantly reduces their dominance, resulting in a more balanced dataset and hence improving the results. A large-scale experiment using 60 public datasets was carried out to validate the proposed methods. The results across three standard evaluation metrics show results comparable to, and in some cases superior to, other common and state-of-the-art techniques.
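A minimal sketch of the general recipe described above, with k-means standing in for the class decomposition step and naive random oversampling standing in for the oversampling method (the paper's actual choices and parameters may differ; everything below is illustrative):

import numpy as np
from sklearn.cluster import KMeans


def decompose_and_oversample(X, y, majority_label, n_clusters=4, seed=0):
    """Split the majority class into k sub-classes via k-means and randomly
    oversample every remaining (minority) class up to the largest sub-cluster size."""
    rng = np.random.default_rng(seed)
    maj_mask = y == majority_label
    X_maj, X_min, y_min = X[maj_mask], X[~maj_mask], y[~maj_mask]

    # 1) Class decomposition: relabel majority instances by their cluster id.
    clusters = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(X_maj)
    y_maj_sub = np.array([f"{majority_label}_sub{c}" for c in clusters])

    # 2) Oversampling: bring every minority class up to the largest sub-cluster size.
    target = np.bincount(clusters).max()
    X_parts, y_parts = [X_maj], [y_maj_sub]
    for c in np.unique(y_min):
        idx = np.flatnonzero(y_min == c)
        take = rng.choice(idx, size=max(target, idx.size), replace=True)
        X_parts.append(X_min[take])
        y_parts.append(np.array([str(c)] * take.size))
    return np.vstack(X_parts), np.concatenate(y_parts)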
Article
Full-text available
Streaming data are increasingly present in real-world applications such as sensor measurements, satellite data feeds, the stock market, and financial data. The main characteristics of these applications are the online arrival of data observations at high speed and the susceptibility to changes in the data distributions due to the dynamic nature of real environments. The data stream mining community still faces some primary challenges and difficulties related to the comparison and evaluation of new proposals, mainly due to the lack of publicly available, high-quality, non-stationary real-world datasets. The comparison of stream algorithms proposed in the literature is not an easy task, as authors do not always follow the same recommendations, experimental evaluation procedures, datasets, and assumptions. In this paper, we mitigate problems related to the choice of datasets in the experimental evaluation of stream classifiers and drift detectors. To that end, we propose a new public data repository for benchmarking stream algorithms with real-world data. This repository contains the most popular datasets from the literature and new datasets related to a highly relevant public health problem that involves the recognition of disease vector insects using optical sensors. The main advantage of these new datasets is the prior knowledge of their characteristics and patterns of change, which allows new adaptive algorithms to be evaluated adequately. We also present an in-depth discussion of the characteristics, reasons, and issues that lead to different types of changes in data distribution, as well as a critical review of common problems concerning the current benchmark datasets available in the literature.
Article
stream-learn is a Python package compatible with scikit-learn and developed for drifting and imbalanced data stream analysis. Its main component is a stream generator, which allows producing a synthetic data stream that may incorporate each of the three main concept drift types (i.e., sudden, gradual and incremental drift) in their recurring or non-recurring version, as well as static and dynamic class imbalance. The package allows conducting experiments following established evaluation methodologies (i.e., Test-Then-Train and Prequential). In addition, estimators adapted for data stream classification have been implemented, including both simple classifiers and state-of-the-art chunk-based and online classifier ensembles. The package utilises its own implementations of prediction metrics for imbalanced binary classification tasks to improve computational efficiency.
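A short usage sketch of the kind of experiment the package supports: generating an imbalanced stream with a single drift and evaluating a classifier under the Test-Then-Train protocol. Module and parameter names follow the stream-learn documentation as remembered and may differ between package versions, so treat them as assumptions.

# pip install stream-learn scikit-learn
import strlearn as sl
from sklearn.naive_bayes import GaussianNB

# Synthetic imbalanced stream with one concept drift
# (parameter names assumed from the stream-learn documentation).
stream = sl.streams.StreamGenerator(
    n_chunks=200,
    chunk_size=500,
    n_drifts=1,
    weights=[0.9, 0.1],     # static 90/10 class imbalance
    random_state=42,
)

clf = GaussianNB()          # any estimator supporting partial_fit

# Test-Then-Train evaluation with an imbalance-aware metric.
evaluator = sl.evaluators.TestThenTrain(metrics=(sl.metrics.balanced_accuracy_score,))
evaluator.process(stream, clf)

print(evaluator.scores.shape)   # (n_classifiers, n_evaluated_chunks, n_metrics)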
Article
In many practical applications, complete training sets cannot be collected at one time, so the adaptability of a classifier trained once is poor. Online ensemble learning can solve this problem better. However, most data streams are imbalanced, and an imbalanced data stream greatly affects the performance of online ensemble learning algorithms. To reduce this impact, this paper proposes a cost-sensitive online ensemble learning algorithm for imbalanced data streams. The algorithm uses a variety of balancing mechanisms, mainly including the construction of the initial base classifiers, dynamic calculation of misclassification costs, sampling of instances in the data stream, and calculation of base classifier weights. These mechanisms reduce the influence of class imbalance and improve classification performance on imbalanced data streams. The experimental results show that the proposed algorithm achieves better classification performance on imbalanced data streams. Finally, the algorithm is applied to network intrusion detection, and simulation experiments on the NSL-KDD dataset show that it can reduce the missed alarm rate and the false alarm rate. The experimental results show that the algorithm can improve detection accuracy, especially the recognition rate of unknown intrusion behavior.
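As a small illustration of the cost-sensitive ingredient mentioned above, the snippet below computes a cost-weighted error in which missing a minority (positive) instance is several times more expensive than a false alarm, and turns it into a base-classifier weight. The cost values and the weighting rule are made up for illustration; the paper computes misclassification costs dynamically.

import numpy as np


def cost_weighted_error(y_true, y_pred, cost_fn=5.0, cost_fp=1.0):
    """Misclassification error in which false negatives on the minority (positive)
    class are cost_fn times more expensive than false positives."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    costs = np.where((y_true == 1) & (y_pred == 0), cost_fn, 0.0)
    costs += np.where((y_true == 0) & (y_pred == 1), cost_fp, 0.0)
    return costs.sum() / (cost_fn * (y_true == 1).sum() + cost_fp * (y_true == 0).sum())


def member_weight(y_true, y_pred, **cost_kwargs):
    """Turn the cost-weighted error into an ensemble member weight (higher is better)."""
    return max(1e-6, 1.0 - cost_weighted_error(y_true, y_pred, **cost_kwargs))


# Example: a classifier that misses all minority instances gets a low weight.
y_true = np.array([0, 0, 0, 0, 0, 0, 0, 0, 1, 1])
y_pred = np.array([0, 0, 0, 0, 0, 0, 0, 0, 0, 0])   # predicts only the majority class
print(member_weight(y_true, y_pred))                 # roughly 0.44 despite 80% accuracy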
Article
Many researchers working on classification problems evaluate the quality of developed algorithms based on computer experiments. The conclusions drawn from them are usually supported by statistical analysis and a chosen experimental protocol. Statistical tests are widely used to confirm whether the considered methods significantly outperform reference classifiers. Usually, the tests are applied to stratified datasets, which raises the question of whether the data folds used for classification are really randomly drawn and how the statistical analysis supports robust conclusions. Unfortunately, some scientists do not realize the real meaning of the obtained results and overinterpret them. They do not see that inappropriate use of such analytical tools may lead them into a trap. This paper aims to show the weaknesses of commonly used experimental protocols and to discuss whether we can really trust such an evaluation methodology, whether all presented evaluations are fair, and whether it is possible to manipulate the experimental results using well-known statistical evaluation methods. We show that it is possible to select only those results that confirm the experimenter's expectations. We also try to show what can be done to avoid such likely unethical behavior. At the end of this work, we formulate recommendations on improving the experimental protocol in order to design a fair experimental evaluation of classifiers.
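For context, a pairwise comparison of two classifiers over the same folds or streams is typically carried out with a non-parametric paired test; the snippet below applies the Wilcoxon signed-rank test from SciPy to made-up score vectors. The paper's argument is precisely that such a p-value is only as trustworthy as the protocol that produced the paired scores.

import numpy as np
from scipy.stats import wilcoxon

# Made-up paired scores of two classifiers over the same 10 folds / streams.
scores_a = np.array([0.81, 0.79, 0.83, 0.80, 0.78, 0.82, 0.84, 0.80, 0.79, 0.81])
scores_b = np.array([0.78, 0.77, 0.80, 0.79, 0.76, 0.80, 0.81, 0.78, 0.77, 0.79])

# Wilcoxon signed-rank test on the paired differences.
stat, p_value = wilcoxon(scores_a, scores_b)
print(f"statistic={stat:.2f}, p-value={p_value:.4f}")

# The conclusion depends on how the folds were drawn and how many repetitions
# were run, not only on whether p_value < 0.05.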
Article
Imbalanced data classification remains a vital problem. The key is to find methods that classify both the minority and the majority class correctly. The paper presents a classifier ensemble for classifying binary, non-stationary and imbalanced data streams in which the Hellinger distance is used to prune the ensemble. The paper includes an experimental evaluation of the method based on two conducted experiments. The first one checks the impact of the base classifier type on the quality of the classification. In the second experiment, the Hellinger Distance Weighted Ensemble (HDWE) method is compared to selected state-of-the-art methods using a statistical test with two base classifiers. The method was thoroughly tested on many imbalanced data streams, and the obtained results proved the HDWE method's usefulness.
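For reference, the Hellinger distance between two discrete distributions P and Q is H(P, Q) = sqrt(0.5 * sum_i (sqrt(p_i) - sqrt(q_i))^2). The snippet below is a minimal NumPy illustration of computing it from histograms of two data chunks; it is not the HDWE pruning code, and the binning choice is arbitrary.

import numpy as np


def hellinger_distance(p, q):
    """Hellinger distance between two discrete probability distributions."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    p, q = p / p.sum(), q / q.sum()
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))


def hellinger_from_samples(x_old, x_new, bins=10):
    """Estimate the distance between two 1-D samples via shared histogram bins,
    e.g., a feature's distribution in an old chunk versus a new chunk."""
    edges = np.histogram_bin_edges(np.concatenate([x_old, x_new]), bins=bins)
    h_old, _ = np.histogram(x_old, bins=edges)
    h_new, _ = np.histogram(x_new, bins=edges)
    return hellinger_distance(h_old + 1e-12, h_new + 1e-12)   # smoothing avoids 0/0


print(hellinger_distance([0.5, 0.5], [0.5, 0.5]))   # 0.0 for identical distributions
print(hellinger_distance([1.0, 0.0], [0.0, 1.0]))   # 1.0 for disjoint distributions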
Article
Free access until the end of October 2020: https://authors.elsevier.com/a/1blq25a7-GjBOl
This work aims to connect two rarely combined research directions, i.e., non-stationary data stream classification and data analysis with skewed class distributions. We propose a novel framework employing stratified bagging for training base classifiers in order to integrate data preprocessing and dynamic ensemble selection methods for imbalanced data stream classification. The proposed approach has been evaluated based on computer experiments carried out on 135 artificially generated data streams with various imbalance ratios, label noise levels, and types of concept drift, as well as on two selected real streams. Four preprocessing techniques and two dynamic selection methods, used on both the bagging classifier and base estimator levels, were considered. Experimental results showed that, for highly imbalanced data streams, dynamic ensemble selection coupled with data preprocessing could outperform online and chunk-based state-of-the-art methods.
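A minimal sketch of the stratified bagging ingredient mentioned above: bootstrap samples that preserve the chunk's class proportions, so that every base classifier's bag contains minority instances. This is only an illustration of the sampling step, not the proposed framework.

import numpy as np


def stratified_bootstrap(X, y, n_samples=None, rng=None):
    """Draw a bootstrap sample whose per-class counts match the original
    class proportions, so minority instances appear in every bag."""
    rng = rng or np.random.default_rng()
    n_samples = n_samples or len(y)
    idx_parts = []
    for c, count in zip(*np.unique(y, return_counts=True)):
        n_c = max(1, round(n_samples * count / len(y)))   # at least one per class
        idx_parts.append(rng.choice(np.flatnonzero(y == c), size=n_c, replace=True))
    idx = np.concatenate(idx_parts)
    return X[idx], y[idx]


# Example: build the training bags for an ensemble of 5 base classifiers.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))
y = (rng.random(1000) < 0.05).astype(int)        # roughly 5% minority class
bags = [stratified_bootstrap(X, y, rng=rng) for _ in range(5)]
print([int(yb.sum()) for _, yb in bags])          # every bag contains minority instances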
Article
The basic question addressed in this paper is: how can a learning algorithm cope with incorrect training examples? Specifically, how can algorithms that produce an “approximately correct” identification with “high probability” for reliable data be adapted to handle noisy data? We show that when the teacher may make independent random errors in classifying the example data, the strategy of selecting the most consistent rule for the sample is sufficient, and usually requires a feasibly small number of examples, provided noise affects less than half the examples on average. In this setting we are able to estimate the rate of noise using only the knowledge that the rate is less than one half. The basic ideas extend to other types of random noise as well. We also show that the search problem associated with this strategy is intractable in general. However, for particular classes of rules the target rule may be efficiently identified if we use techniques specific to that class. For an important class of formulas – the k-CNF formulas studied by Valiant – we present a polynomial-time algorithm that identifies concepts in this form when the rate of classification errors is less than one half.
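For reference, guarantees of this kind are usually stated as sample-complexity bounds. A commonly cited form for learning a finite rule class H when labels are flipped independently with a rate bounded by eta_b < 1/2 is (constants quoted from memory, so treat them as approximate):

m \ge \frac{2}{\epsilon^{2}\,(1 - 2\eta_b)^{2}} \ln\frac{2|H|}{\delta}

That is, selecting the most consistent rule on a sample of this size yields, with probability at least 1 - \delta, a hypothesis with error at most \epsilon.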