Keywords: Pairwise preferences, Feature-based modeling, Bradley-Terry-Luce, Least squares algorithm, Sample complexity analysis, Graph matching theory, Stochastic comparison models, Item feature correlation, Information-theoretic lower bound, Experimental evaluations
TL;DR: We introduce the feature-Bradley-Terry-Luce model and propose a least squares algorithm with reduced sample complexity for ranking items based on pairwise preferences, leveraging item feature correlation and graph matching theory.
Abstract: We consider the problem of ranking a set of $n$ items given a sample of their pairwise preferences. It is well known from classical results in the sorting literature that, without any further assumptions, one requires a sample of size $\Omega(n \log n)$ with active selection of pairs, whereas for a random set of pairwise preferences the bound can be as bad as $\Omega(n^2)$. However, what if the learner has additional knowledge of the item features, and the pairwise preferences are known to be modelled in terms of feature similarities -- can these bounds be improved? In particular, we introduce a new probabilistic preference model, called feature-Bradley-Terry-Luce (f-BTL), for this purpose, and present a new least squares based algorithm, fBTL-LS, which requires far fewer than $O(n\log n)$ random pairs to obtain a `good' ranking. The sample complexity of our proposed algorithm depends on the degree of feature correlation among the items; our analysis makes use of tools from classical graph matching theory, shedding light on the true complexity of the problem, which was not possible with existing matrix-completion based tools. We also prove the tightness of our results by showing a matching information-theoretic lower bound for the problem. Our theoretical results are corroborated with extensive experimental evaluations on various datasets.
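To make the setting concrete, below is a minimal, hypothetical sketch of feature-based BTL ranking via least squares, in the spirit of what the abstract describes. The specific modelling choices here are assumptions for illustration only (linear scores $s_i = \langle w, x_i \rangle$, logistic preference probabilities, and a least-squares fit on empirical log-odds); the actual f-BTL model and the fBTL-LS algorithm are defined in the paper and may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Toy setup (all modelling choices are illustrative assumptions) ---
n, d = 20, 5                      # n items, d-dimensional features
X = rng.normal(size=(n, d))       # item feature matrix
w_true = rng.normal(size=d)       # hidden weight vector
scores = X @ w_true               # item scores s_i = <w, x_i>

def pref_prob(i, j):
    """BTL-style preference: P(i beats j) = sigmoid(s_i - s_j)."""
    return 1.0 / (1.0 + np.exp(-(scores[i] - scores[j])))

# Sample m random pairs, each compared k times
m, k = 60, 200
pairs = [tuple(rng.choice(n, size=2, replace=False)) for _ in range(m)]

rows, targets = [], []
for i, j in pairs:
    p_hat = rng.binomial(k, pref_prob(i, j)) / k   # empirical win rate
    p_hat = np.clip(p_hat, 1e-3, 1 - 1e-3)         # avoid infinite logits
    rows.append(X[i] - X[j])                       # feature difference
    targets.append(np.log(p_hat / (1 - p_hat)))    # empirical log-odds

# Least-squares fit: (x_i - x_j)^T w ≈ logit(p_hat) for each sampled pair
A, b = np.asarray(rows), np.asarray(targets)
w_hat, *_ = np.linalg.lstsq(A, b, rcond=None)

# Rank items by estimated scores and compare with the true ranking
est_rank = np.argsort(-(X @ w_hat))
true_rank = np.argsort(-scores)
print("true top-5:", true_rank[:5])
print("est. top-5:", est_rank[:5])
```

Note that with $d \ll n$, fitting the $d$-dimensional weight vector from pairwise log-odds only requires roughly $O(d)$ informative pairs in this toy setting, which is the intuition behind beating the feature-free $\Omega(n \log n)$ barrier.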
List Of Authors: Saha, Aadirupa and Rajkumar, Arun
LaTeX Source Code: zip
Signed License Agreement: pdf
Submission Number: 511