Robust penalized logistic regression with truncated loss functions

Department of Health Studies, Chicago, IL 60615, USA.
Canadian Journal of Statistics (Impact Factor: 0.7). 06/2011; 39(2):300-323. DOI: 10.1002/cjs.10105
Source: PubMed

ABSTRACT Penalized logistic regression (PLR) is a powerful statistical tool for classification and has been widely used in practical problems. Despite this success, the loss function of the PLR is unbounded, so the resulting classifiers can be sensitive to outliers. To build more robust classifiers, we propose the robust PLR (RPLR), which uses truncated logistic loss functions, and suggest three schemes for estimating conditional class probabilities. We discuss connections between the RPLR and other existing work on robust logistic regression. Our theoretical results indicate that the RPLR is Fisher consistent and more robust to outliers. Moreover, we develop the estimated generalized approximate cross-validation (EGACV) criterion for tuning parameter selection. Through numerical examples, we demonstrate that truncating the loss function indeed yields better performance in both classification accuracy and class probability estimation.
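The truncation idea in the abstract can be sketched in a few lines: the usual logistic loss grows without bound as the margin becomes more negative, while the truncated version caps it at its value at some truncation point. The cap location `s = -1.0` below is an illustrative assumption, not the paper's prescribed choice.

```python
import numpy as np

def logistic_loss(u):
    # standard logistic (deviance) loss log(1 + exp(-u)),
    # computed in a numerically stable way
    return np.logaddexp(0.0, -u)

def truncated_logistic_loss(u, s=-1.0):
    # truncated loss: cap the logistic loss at its value at margin s,
    # so large-negative-margin outliers cannot dominate the fit
    # (s = -1.0 is an assumed illustrative truncation point)
    return np.minimum(logistic_loss(u), logistic_loss(s))
```

For a correctly classified point with a large margin the two losses agree, while a badly misclassified outlier (large negative margin) contributes at most the bounded value `logistic_loss(s)`, which is the source of the robustness.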

  • ABSTRACT: Feature selection is an important preprocessing step in machine learning and pattern recognition, and a data mining task in some real-world applications. Feature quality evaluation is a key issue when designing a feature selection algorithm, and in recent years the classification margin has been widely used for this purpose. In this study, we introduce a robust loss function, called the Brownboost loss, to evaluate feature quality and select optimal feature subsets with enhanced robustness. We compute the classification loss in a feature space using the hypothesis-margin and minimize it by optimizing the feature weights. An algorithm is developed based on gradient descent with L2-norm regularization. The proposed algorithm is tested on UCI datasets and gene expression datasets, and the experimental results show that it is effective in improving classification robustness.
    Knowledge-Based Systems 12/2013; 54:180–198. DOI:10.1016/j.knosys.2013.09.005 · 3.06 Impact Factor
