Article

Preference-based learning to rank

Machine Learning (Impact Factor: 1.89). 09/2010; 80(2-3):189-211. DOI: 10.1007/s10994-010-5176-9
Source: DBLP

ABSTRACT

This paper presents an efficient preference-based ranking algorithm running in two stages. In the first stage, the algorithm learns a preference function defined over pairs, as in a standard binary classification problem. In the second stage, it makes use of that preference function to produce an accurate ranking, thereby reducing the learning problem of ranking to binary classification. This reduction is based on the familiar QuickSort and guarantees an expected pairwise misranking loss of at most twice that of the binary classifier derived in the first stage. Furthermore, in the important special case of bipartite ranking, the factor of two in loss is reduced to one. This improved bound also applies to the regret achieved by our ranking and that of the binary classifier obtained.
Our algorithm is randomized, but we prove a lower bound for any deterministic reduction of ranking to binary classification showing that randomization is necessary to achieve our guarantees. This, and a recent result by Balcan et al., who show a regret bound of two for a deterministic algorithm in the bipartite case, suggest a trade-off between achieving low regret and determinism in this context.
Our reduction also admits an improved running time guarantee with respect to that deterministic algorithm. In particular, the number of calls to the preference function in the reduction is improved from Ω(n²) to O(n log n). In addition, when the top k ranked elements only are required (k ≪ n), as in many applications in information extraction or search engine design, the time complexity of our algorithm can be further reduced to O(k log k + n). Our algorithm is thus practical for realistic applications where the number of points to rank exceeds several thousand.
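As a concrete illustration of the second stage, the sketch below runs a randomized QuickSort whose comparator is the learned preference function. Here pref(u, v) is a hypothetical stand-in for that function, assumed to return the probability (in [0, 1]) that u should be ranked above v; this is a minimal sketch under that assumption, not the authors' reference implementation.

import random

def quicksort_rank(items, pref, rng=None):
    # Rank `items` with randomized QuickSort, using a learned pairwise
    # preference function `pref(u, v)` as a probabilistic comparator.
    # Sketch only: `pref` is a hypothetical learned model, not code from the paper.
    rng = rng or random.Random(0)
    if len(items) <= 1:
        return list(items)
    pivot_idx = rng.randrange(len(items))
    pivot = items[pivot_idx]
    rest = items[:pivot_idx] + items[pivot_idx + 1:]
    above, below = [], []
    for x in rest:
        # Place x above the pivot with probability pref(x, pivot).
        (above if rng.random() < pref(x, pivot) else below).append(x)
    return quicksort_rank(above, pref, rng) + [pivot] + quicksort_rank(below, pref, rng)

# Toy usage with a noiseless preference that ranks smaller numbers first.
pref = lambda u, v: 1.0 if u < v else 0.0
print(quicksort_rank([5, 3, 8, 1, 4], pref))  # [1, 3, 4, 5, 8]

The expected number of calls to pref in this recursion is O(n log n), matching the bound above; when only the top k elements are needed, the recursion can skip any branch that cannot intersect the top-k prefix, which is, roughly, the idea behind the O(k log k + n) variant mentioned in the abstract.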

Full-text preview

Available from: stat.uchicago.edu
  • Source
    • "There are many known regret reductions for such problems as multiclass classification [33] [47], costsensitive classification [12] [39], and ranking [2] [3]. There is also a rich body of work on so called surrogate regret bounds. "
    ABSTRACT: We provide a summary of the mathematical and computational techniques that have enabled learning reductions to effectively address a wide class of problems, and show that this approach to solving machine learning problems can be broadly useful.
    Preview · Article · Feb 2015
  • Source
    ABSTRACT: We study the problem of learning to rank from pairwise preferences, and solve a long-standing open problem that has led to the development of many heuristics but no provable results for our particular problem. Given a set $V$ of $n$ elements, we wish to linearly order them given pairwise preference labels. A pairwise preference label is obtained as a response, typically from a human, to the question "which is preferred, $u$ or $v$?" for two elements $u,v\in V$. We allow for non-transitivity paradoxes, which may arise naturally due to human mistakes or irrationality. The goal is to linearly order the elements from the most preferred to the least preferred, while disagreeing with as few pairwise preference labels as possible. Our performance is measured by two parameters: the loss and the query complexity (the number of pairwise preference labels we obtain). This is a typical learning problem, with the exception that the space from which the pairwise preferences are drawn is finite, consisting of only ${n\choose 2}$ possibilities. We present an active learning algorithm for this problem, with query bounds significantly beating general (non-active) bounds for the same error guarantee, while almost achieving the information-theoretic lower bound. Our main construct is a decomposition of the input such that (i) each block incurs high loss at optimum, and (ii) the optimal solution respecting the decomposition is not much worse than the true optimum. The decomposition is obtained by adapting a recent result by Kenyon and Schudy for a related combinatorial optimization problem to the query-efficient setting. We thus settle an open problem posed by learning-to-rank theoreticians and practitioners: What is a provably correct way to sample preference labels? To further show the power and practicality of our solution, we show how to use it in concert with an SVM relaxation.
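    (A minimal sketch of the pairwise disagreement loss described here appears after this list of citing articles.)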
    Preview · Article · Oct 2010 · Journal of Machine Learning Research
  • Source

    Full-text · Article · Jan 2011
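
As referenced in the second citing abstract above, the quality of a proposed linear order is measured by how many pairwise preference labels it disagrees with. The sketch below computes that disagreement count; the function and variable names are hypothetical illustrations, not taken from either paper.

def pairwise_disagreements(order, labels):
    # `order`  : list of elements, most preferred first.
    # `labels` : iterable of (u, v) pairs meaning "u was preferred to v".
    # Returns the number of labels the proposed order violates.
    position = {x: i for i, x in enumerate(order)}
    return sum(1 for u, v in labels if position[u] > position[v])

# Toy usage: the second label contradicts the proposed order, so the loss is 1.
order = ["a", "b", "c"]
labels = [("a", "b"), ("c", "b")]
print(pairwise_disagreements(order, labels))  # 1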