Improving On-Demand Learning to Rank through Parallelism

Published: 01 Jan 2012 · Last Modified: 16 May 2025 · WISE 2012 · CC BY-SA 4.0
Abstract: Traditional Learning to Rank (L2R) is usually conducted in batch mode, in which a single ranking function is learned to order results for future queries. This approach is inflexible, since future queries may differ considerably from those in the training set and, consequently, the learned function may not work properly. Ideally, a distinct ranking function should be learned on demand for each query. Nevertheless, on-demand L2R may significantly degrade query processing time, as the ranking function has to be learned on-the-fly before it can be applied. In this paper we present a parallel implementation of an on-demand L2R technique that drastically reduces the response time of the previous serial implementation. Our implementation uses thousands of GPU threads to learn a ranking function for each query, and takes advantage of a reduced training set obtained through active learning. Experiments with the LETOR benchmark show that our proposed approach achieves a mean speedup of 127x in query processing time compared to the sequential version, while producing very competitive ranking effectiveness.
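To make the on-demand idea concrete, the following is a minimal sketch (not the paper's actual method) of per-query L2R: for each incoming query, a small training subset is selected and a lightweight pointwise ranker is fit on-the-fly before scoring the query's candidate documents. The subset-selection criterion here (nearest training documents to the candidates' centroid) is a hypothetical stand-in for the paper's active-learning procedure, and the least-squares ranker stands in for whatever learner the authors parallelize on the GPU.

```python
import numpy as np

def select_training_subset(query_feats, train_X, train_y, k=50):
    # Stand-in for active learning: keep the k training documents whose
    # feature vectors lie closest to the centroid of this query's
    # candidate documents (hypothetical selection criterion).
    centroid = query_feats.mean(axis=0)
    dists = np.linalg.norm(train_X - centroid, axis=1)
    idx = np.argsort(dists)[:k]
    return train_X[idx], train_y[idx]

def rank_on_demand(query_feats, train_X, train_y, k=50):
    # Learn a per-query pointwise ranker (least-squares linear fit,
    # with a bias term) on the reduced training set, then score the
    # candidates and return their indices ordered best-first.
    X, y = select_training_subset(query_feats, train_X, train_y, k)
    A = np.c_[X, np.ones(len(X))]
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    scores = np.c_[query_feats, np.ones(len(query_feats))] @ w
    return np.argsort(-scores)
```

In the paper's setting, the expensive step (fitting the per-query function) is what the thousands of GPU threads parallelize; the sketch above only illustrates the serial control flow of learning one function per query.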