Conference Paper

Analyzing Sensory Data Using Non-linear Preference Learning with Feature Subset Selection.

DOI: 10.1007/978-3-540-30115-8_28 Conference: Machine Learning: ECML 2004, 15th European Conference on Machine Learning, Pisa, Italy, September 20-24, 2004, Proceedings
Source: DBLP

ABSTRACT The quality of food can be assessed from different points of view. In this paper, we deal with those aspects that can be appreciated through sensory impressions. When we aim to induce a function that maps object descriptions into ratings, we must consider that consumers' ratings are just a way to express their preferences about the products presented in the same testing session. Therefore, we propose to learn from consumers' preference judgments instead of using an approach based on regression. This requires the use of special-purpose kernels and feature subset selection methods. We illustrate the benefits of our approach on two families of real-world databases.
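The pairwise reduction the abstract alludes to can be sketched as follows. Within each testing session, every pair of differently rated products yields a preference example, and a linear utility is fit on the difference vectors. All data and names here are illustrative stand-ins, and the simple perceptron-style trainer merely substitutes for the SVMs with special-purpose kernels the paper actually uses:

```python
import numpy as np

# Hypothetical toy data: each session presents a few products to one
# consumer, who rates them on an arbitrary personal scale; only the
# within-session ordering is trusted, not the absolute rating values.
sessions = [
    (np.array([[1.0, 0.2], [0.4, 0.9], [0.7, 0.5]]), [5, 2, 3]),
    (np.array([[0.9, 0.1], [0.3, 0.8]]),             [4, 1]),
]

# Build pairwise training examples: whenever product i is rated above
# product j in the same session, the difference vector x_i - x_j is a
# positive example (the usual RankSVM-style reduction).
X_diff = []
for X, r in sessions:
    for i in range(len(r)):
        for j in range(len(r)):
            if r[i] > r[j]:
                X_diff.append(X[i] - X[j])
X_diff = np.array(X_diff)

# A minimal linear ranker: a few perceptron passes that push every
# difference vector to the positive side of the hyperplane.
w = np.zeros(X_diff.shape[1])
for _ in range(100):
    for x in X_diff:
        if w @ x <= 0:
            w += x

# The learned utility w @ x now orders products within each session.
for X, r in sessions:
    print(np.argsort(-(X @ w)), np.argsort(-np.array(r)))
```

Learning on difference vectors only asks the model to reproduce each session's ranking, which is exactly what the abstract argues for: consumers' numeric scores are comparable within a session but not across consumers or sessions.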



Available from: Gustavo Fernandez Bayon, Jul 06, 2015
  • Source
    ABSTRACT: Learning tasks where the set Y of classes has an ordering relation arise in a number of important application fields. In this context, the loss function may be defined in different ways, ranging from multiclass classification to ordinal or metric regression. However, to consider only the ordered structure of Y, a measure of the goodness of a hypothesis h has to be related to the number of pairs whose relative ordering is swapped by h. In this paper, we present a method, based on a multivariate version of Support Vector Machines (SVM), that learns to order by minimizing the number of swapped pairs. Finally, using benchmark datasets, we compare the scores so achieved with those found by other alternative approaches.
  • Source
    ABSTRACT: A recommender system has to collect users' preference data. To collect such data, rating or scoring methods that use rating scales, such as good-fair-poor or a five-point scale, have been employed. We replaced such collection methods with a ranking method, in which objects are sorted according to the degree of a user's preference. We developed a technique to convert the rankings to scores based on order statistics theory. This technique successfully improved the accuracy of ranking recommended items. However, we targeted only memory-based recommendation algorithms. To test whether the use of ranking methods and our conversion technique is effective for a wide variety of recommenders, we apply our conversion technique to model-based algorithms.
    Proceedings of the 2010 ACM Conference on Recommender Systems, RecSys 2010, Barcelona, Spain, September 26-30, 2010; 09/2010
  • Source
    ABSTRACT: The selection of a subset of input variables is often based on the previous construction of a ranking to order the variables according to a given criterion of relevancy. The objective is then to linearize the search, estimating the quality of subsets containing the topmost ranked variables. An algorithm devised to rank input variables according to their usefulness in the context of a learning task is presented. This algorithm is the result of a combination of simple and classical techniques, like correlation and orthogonalization, which allow the construction of a fast algorithm that also deals explicitly with redundancy. Additionally, the proposed ranker is endowed with a simple polynomial expansion of the input variables to cope with nonlinear problems. The comparison with some state-of-the-art rankers showed that this combination of simple components is able to yield high-quality rankings of input variables. The experimental validation is made on a wide range of artificial data sets and the quality of the rankings is assessed using a ROC-inspired setting, to avoid biased estimations due to any particular learning algorithm.
    Computational Statistics & Data Analysis 09/2007; 52(1):578-595. DOI:10.1016/j.csda.2007.02.003
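The correlation-plus-orthogonalization idea in the last abstract can be sketched as a simple forward ranker: pick the feature most correlated with the target, then deflate the remaining columns and the target against it so that redundant copies score near zero on later rounds. The function name and all details below are illustrative; the paper's actual algorithm also adds a polynomial expansion of the inputs to handle nonlinear problems:

```python
import numpy as np

def rank_features(X, y):
    """Rank the columns of X by relevance to y, handling redundancy
    explicitly: after each pick, orthogonalize the remaining columns
    and the target against the chosen feature before re-scoring."""
    X = X - X.mean(axis=0)          # center features
    yw = y - y.mean()               # centered working copy of the target
    Xw = X.astype(float).copy()
    remaining = list(range(X.shape[1]))
    ranking = []
    while remaining:
        # Score = |correlation| of each remaining column with the
        # residual target; deflated (near-zero) columns score 0.
        scores = []
        for j in remaining:
            c = Xw[:, j]
            denom = np.linalg.norm(c) * np.linalg.norm(yw)
            scores.append(abs(c @ yw) / denom if denom > 1e-12 else 0.0)
        best = remaining[int(np.argmax(scores))]
        ranking.append(best)
        remaining.remove(best)
        # Remove the chosen feature's direction from everything left.
        u = Xw[:, best]
        nu = u @ u
        if nu > 1e-12:
            for j in remaining:
                Xw[:, j] -= (Xw[:, j] @ u) / nu * u
            yw = yw - (yw @ u) / nu * u
    return ranking

# Illustrative check: feature 1 is an exact copy of feature 0, so after
# feature 0 is picked and deflated, the copy loses all its score and the
# independently informative feature 2 is ranked ahead of it.
rng = np.random.default_rng(0)
x0 = rng.normal(size=200)
x2 = rng.normal(size=200)
X = np.column_stack([x0, x0.copy(), x2])
y = 2.0 * x0 + x2
print(rank_features(X, y))
```

The deflation step is what makes this more than a plain correlation filter: a redundant variable can be highly correlated with the target on its own, yet contributes nothing once its near-duplicate has been selected.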