Preference-based learning to rank

Machine Learning, 80(2-3):189–211, September 2010. DOI: 10.1007/s10994-010-5176-9

This paper presents an efficient preference-based ranking algorithm running in two stages. In the first stage, the algorithm
learns a preference function defined over pairs, as in a standard binary classification problem. In the second stage, it makes
use of that preference function to produce an accurate ranking, thereby reducing the learning problem of ranking to binary
classification. The reduction is based on the familiar QuickSort algorithm and guarantees an expected pairwise misranking loss of at
most twice that of the binary classifier derived in the first stage. Furthermore, in the important special case of bipartite
ranking, the factor of two in the loss is reduced to one. This improved bound also applies to the regret: the regret of our
ranking is bounded by that of the binary classifier obtained.
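
To make the two-stage scheme concrete, here is a minimal Python sketch of the reduction, not the authors' implementation: stage one is assumed to produce a scikit-learn-style binary classifier trained on labeled pairs, and stage two ranks by running a randomized QuickSort whose comparator is the induced preference function. The names `clf` and `featurize_pair` are placeholders for whatever pairwise model and feature map one actually uses.

```python
import random

def quicksort_rank(items, prefer):
    """Stage two: randomized QuickSort using the learned preference function
    prefer(u, v) (True if u should be ranked above v) as its comparator."""
    if len(items) <= 1:
        return list(items)
    pivot = random.choice(items)
    rest = [x for x in items if x is not pivot]
    above = [x for x in rest if prefer(x, pivot)]
    below = [x for x in rest if not prefer(x, pivot)]
    return quicksort_rank(above, prefer) + [pivot] + quicksort_rank(below, prefer)

def make_prefer(clf, featurize_pair):
    """Stage one (assumed done elsewhere): wrap a trained binary classifier
    as a preference function. clf and featurize_pair are placeholders."""
    def prefer(u, v):
        # probability that u should precede v; assumes classes are {0, 1}
        return clf.predict_proba([featurize_pair(u, v)])[0][1] >= 0.5
    return prefer
```

Because QuickSort only ever compares pairs, the expected number of preference-function calls is O(n log n), which is what makes the stage-two ranking cheap relative to evaluating all pairs.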

Our algorithm is randomized, but we prove a lower bound for any deterministic reduction of ranking to binary classification
showing that randomization is necessary to achieve our guarantees. This, together with a recent result by Balcan et al. showing
a regret bound of two for a deterministic algorithm in the bipartite case, suggests a trade-off between achieving low regret
and determinism in this context.

Our reduction also admits an improved running-time guarantee with respect to that deterministic algorithm. In particular,
the number of calls to the preference function in the reduction is improved from Ω(n²) to O(n log n). In addition, when only the top k ranked elements are required (k ≪ n), as in many applications in information extraction or search engine design, the time complexity of our algorithm can be
further reduced to O(k log k + n). Our algorithm is thus practical for realistic applications where the number of points to rank exceeds several thousand.
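
As an illustration of the top-k behavior, the following sketch (reusing `quicksort_rank` and a `prefer` function from the sketch above) partitions around a random pivot and recurses only into the part that can still contain the top k, fully sorting just that block. This is one standard way to obtain an expected O(k log k + n) number of preference calls; it is offered as an assumption-laden illustration, not necessarily the exact procedure of the paper.

```python
import random

def top_k_rank(items, k, prefer):
    """Return only the k highest-ranked items, in order.

    Partition around a random pivot; if the 'above' side already holds k or
    more items, the rest can be discarded and we recurse there alone.
    Otherwise the whole 'above' block plus the pivot is part of the answer,
    and only the remaining slots are sought below the pivot. In expectation
    this uses O(k log k + n) preference calls."""
    if k <= 0 or not items:
        return []
    if len(items) <= k:
        return quicksort_rank(items, prefer)      # small block: sort it fully
    pivot = random.choice(items)
    rest = [x for x in items if x is not pivot]
    above = [x for x in rest if prefer(x, pivot)]
    below = [x for x in rest if not prefer(x, pivot)]
    if len(above) >= k:
        return top_k_rank(above, k, prefer)       # top k lies entirely above the pivot
    head = quicksort_rank(above, prefer) + [pivot]
    return head + top_k_rank(below, k - len(head), prefer)
```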
