Context-Aware Learning to Rank with Self-Attention

21 Oct 2019 (modified: 18 Jun 2020) · OpenReview Anonymous Preprint Blind Submission
Abstract: In learning to rank, one is interested in optimising the global ordering of a list of items according to their utility for users. Popular approaches learn a scoring function that scores items individually (i.e. without the context of other items in the list) by optimising a pointwise, pairwise or listwise loss. The list is then sorted in descending order of the scores. Possible interactions between items present in the same list are taken into account during training at the loss level. However, during inference, items are scored individually, and possible interactions between them are not considered. In this paper, we propose a context-aware neural network model that learns item scores by applying a self-attention mechanism. The relevance of a given item is thus determined in the context of all other items present in the list, both in training and in inference. Finally, we empirically demonstrate significant performance gains of the self-attention-based neural architecture over Multi-Layer Perceptron baselines. This effect is consistent across popular pointwise, pairwise and listwise losses on datasets with both implicit and explicit relevance feedback.
Keywords: learning to rank, self-attention, context-aware ranking
TL;DR: Learning to rank using the Transformer architecture.
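To make the idea concrete, the following is a minimal PyTorch sketch of a context-aware list scorer, assuming a standard Transformer encoder over the items of a list; it is an illustration only, not the authors' exact architecture, and the ContextAwareScorer name, feature dimensionality, and layer sizes are assumptions.

import torch
import torch.nn as nn

class ContextAwareScorer(nn.Module):
    """Scores each item in a list in the context of all other items
    in that list via self-attention (Transformer encoder)."""
    def __init__(self, num_features, d_model=64, nhead=4, num_layers=2):
        super().__init__()
        self.input_proj = nn.Linear(num_features, d_model)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=nhead, batch_first=True)  # requires PyTorch >= 1.9
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=num_layers)
        self.score = nn.Linear(d_model, 1)

    def forward(self, items, padding_mask=None):
        # items: (batch, list_len, num_features)
        # padding_mask: (batch, list_len), True marks padded slots
        h = self.input_proj(items)
        h = self.encoder(h, src_key_padding_mask=padding_mask)
        return self.score(h).squeeze(-1)  # (batch, list_len) item scores

# Usage: score two lists of 10 items with 136 features each,
# then rank each list by descending score.
scorer = ContextAwareScorer(num_features=136)
scores = scorer(torch.randn(2, 10, 136))
ranking = scores.argsort(dim=-1, descending=True)

Because every item's score is produced from an encoding that attends to all other items in the same list, interactions between items are captured at inference time as well as at training time, unlike a per-item MLP scorer.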