Keywords: Ranking, Lower Bound, Discrete Ratings, Threshold Model
TL;DR: We show the difficulty of ranking from discrete ratings by analyzing a simple model with random user thresholds that govern individual rating biases.
Abstract: Ranking items is a central task in many information retrieval and recommender systems.
User input for the ranking task often comes in the form of ratings on a coarse discrete scale.
We ask whether it is possible to recover a fine-grained item ranking from such coarse-grained ratings.
We model items as having scores and users as having thresholds; a user likes an item if the score exceeds the threshold, and dislikes it otherwise.
Although all users implicitly agree on the total item order, estimating that order is challenging when both the scores and the thresholds are latent.
Under our model, any ranking method naturally partitions the $n$ items into bins; the bins are ordered, but the items inside each bin are still unordered.
Users arrive sequentially, and every new user can be queried to refine the current ranking.
We prove that achieving a near-perfect ranking, measured by Spearman distance, requires $\Theta(n^2)$ users (and therefore $\Omega(n^2)$ queries).
This is significantly worse than the $O(n\log n)$ queries needed to rank either from comparisons or from ratings with known user thresholds; the gap reflects the additional queries needed to estimate each user's latent threshold.
Our bound also quantifies the impact of a mismatch between the score and threshold distributions via a quadratic divergence factor.
To show that this bound is tight, we provide a ranking algorithm whose query complexity matches our lower bound up to a logarithmic factor.
Our work reveals a tension in online ranking: diversity in thresholds is necessary to merge coarse ratings from many users into a fine-grained ranking, but this diversity has a cost if the thresholds are a priori unknown.
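The threshold model described in the abstract can be sketched in a few lines. The snippet below is an illustration of the generative model, not the paper's algorithm: scores and thresholds are drawn i.i.d. Uniform(0, 1) (an assumed matched-distribution setting), each user likes an item iff its latent score exceeds the user's latent threshold, and ranking items by aggregate like-counts estimates the true order, since the probability that a random user likes item $i$ equals its score.

```python
import random

def simulate(n_items, n_users, seed=0):
    """Toy simulation of the threshold model (illustrative only):
    item scores and user thresholds are i.i.d. Uniform(0, 1);
    user u likes item i iff scores[i] > threshold_u. Ranking by
    total like-count estimates the score order, because under
    matched distributions P(user likes item i) = scores[i].
    Returns the Spearman footrule distance between the estimated
    and true rankings."""
    rng = random.Random(seed)
    scores = [rng.random() for _ in range(n_items)]
    likes = [0] * n_items
    for _ in range(n_users):
        t = rng.random()  # latent user threshold: one binary cut per user
        for i, s in enumerate(scores):
            if s > t:
                likes[i] += 1
    est_rank = sorted(range(n_items), key=lambda i: likes[i])
    true_rank = sorted(range(n_items), key=lambda i: scores[i])
    pos = {item: r for r, item in enumerate(true_rank)}
    # Spearman footrule: sum of absolute rank displacements
    return sum(abs(pos[item] - r) for r, item in enumerate(est_rank))
```

Each user contributes only a single binary cut of the item set, which is why many users (and many queries per user to locate their threshold) are needed before adjacent items, whose scores differ by roughly $1/n$, can be separated; this is the intuition behind the $\Theta(n^2)$ bound.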
Submission Number: 62