Project

Multiobjective Classifier Learning for Chosen Decision Tasks

Goal: The project will focus on the possibility of overcoming the above-mentioned difficulties by using multi-criteria optimization methods that return a set of Pareto-optimal solutions, enabling the user to select a specific classification model, by proposing automatic methods for its selection, or by aggregating acceptable models using the combined classification paradigm. In this project, we form the hypothesis that:
It is possible to propose classifier learning algorithms based on multi-criteria optimization that return a set of Pareto-optimal models whose individual prediction quality is at least as good as that of classifiers trained using aggregated criteria.


Project log

Michal Wozniak
added 2 research items
Many decision problems require taking into account a compromise between the various goals we want to achieve. A specific group of features often decides the state of a given object. An example of such a task is feature selection that increases decision quality while minimizing the cost of features or the total budget. The work's main purpose is to compare feature selection methods: the classical approach, single-objective optimization, and multi-objective optimization. The article proposes a feature selection algorithm using a Genetic Algorithm with multiple criteria, i.e., feature cost and accuracy. In this way, Pareto-optimal points for the nonlinear multi-criteria optimization problem were obtained. These points constitute a compromise between two conflicting objectives. Experiments with various base classifiers have shown that the proposed approach can be applied to the task of optimizing difficult data.
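The cost/accuracy trade-off described above can be illustrated with a minimal Pareto-front sketch. The feature names, costs, and additive accuracy model below are illustrative assumptions, not values from the paper, and the search enumerates all subsets for clarity where the article uses a Genetic Algorithm:

```python
from itertools import combinations

# Hypothetical per-feature acquisition costs and accuracy gains
# (illustrative only, not taken from the paper).
FEATURE_COST = {"age": 1.0, "ecg": 5.0, "blood": 3.0, "mri": 20.0}
FEATURE_GAIN = {"age": 0.05, "ecg": 0.10, "blood": 0.08, "mri": 0.15}

def evaluate(subset):
    """Return (cost, accuracy) for a feature subset; accuracy is a toy
    additive model over a 0.5 baseline, capped at 1.0."""
    cost = sum(FEATURE_COST[f] for f in subset)
    acc = min(1.0, 0.5 + sum(FEATURE_GAIN[f] for f in subset))
    return cost, acc

def dominates(a, b):
    """a dominates b if it is no worse on both objectives
    (lower cost, higher accuracy) and strictly better on at least one."""
    return (a[0] <= b[0] and a[1] >= b[1]) and (a[0] < b[0] or a[1] > b[1])

def pareto_front(candidates):
    """Keep only the subsets whose (cost, accuracy) pair is non-dominated."""
    scored = [(s, evaluate(s)) for s in candidates]
    return [
        (s, obj) for s, obj in scored
        if not any(dominates(other, obj) for _, other in scored)
    ]

features = list(FEATURE_COST)
all_subsets = [
    frozenset(c)
    for r in range(1, len(features) + 1)
    for c in combinations(features, r)
]
front = pareto_front(all_subsets)
```

Each point on the resulting front is a different compromise between feature cost and accuracy; a Genetic Algorithm replaces the exhaustive enumeration when the feature space is too large to scan.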
One of the vital problems in training classifiers on imbalanced data is the definition of an optimization criterion. Typically, since the exact cost of misclassifying the individual classes is unknown, combined metrics and loss functions that roughly balance the cost of each class are used. However, this approach can lead to a loss of information, since different trade-offs between class misclassification rates can produce similar combined metric values. To address this issue, this paper discusses a multi-criteria ensemble training method for imbalanced data. The proposed method jointly optimizes precision and recall, and provides the end-user with a set of Pareto-optimal solutions, from which the final one can be chosen according to the user's preference. The proposed approach was evaluated on a number of benchmark datasets and compared with the single-criterion approach (where the selected criterion was one of the chosen metrics). The results of the experiments confirmed the usefulness of the method: on the one hand, it guarantees quality no worse than that obtained with single-criterion optimization; on the other, it offers the user the opportunity to choose the solution that best meets their expectations regarding the trade-off between errors on the minority and majority classes.
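The selection step described above (returning a Pareto front of precision/recall trade-offs and letting the user pick) can be sketched as follows. Model names and validation scores are hypothetical, and the recall-floor preference rule is just one example of a user-supplied choice:

```python
def dominates(a, b):
    """a dominates b if it is at least as good on both precision and
    recall, and strictly better on at least one."""
    return all(x >= y for x, y in zip(a, b)) and a != b

def pareto_front(models):
    """Keep only the non-dominated (precision, recall) trade-offs."""
    return {
        name: pr for name, pr in models.items()
        if not any(dominates(other, pr) for other in models.values())
    }

def choose(front, min_recall):
    """Example preference rule: among models meeting a recall floor,
    take the one with the highest precision."""
    feasible = {n: (p, r) for n, (p, r) in front.items() if r >= min_recall}
    return max(feasible, key=lambda n: feasible[n][0]) if feasible else None

# Hypothetical validation scores for candidate ensemble members.
models = {
    "m1": (0.92, 0.40),
    "m2": (0.85, 0.55),
    "m3": (0.80, 0.50),  # dominated by m2 on both criteria
    "m4": (0.70, 0.75),
    "m5": (0.60, 0.90),
}
front = pareto_front(models)
best = choose(front, min_recall=0.6)
```

The single-criterion baseline would collapse precision and recall into one number before training; here the trade-off is preserved until the user states a preference, e.g. a minimum acceptable recall on the minority class.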
Michal Wozniak
added a project goal