Bandit Algorithm for both Unknown Best Position and Best Item Display on Web Pages

IDA 2021 (modified: 03 Nov 2022)
Abstract: Multiple-play bandits aim at displaying relevant items at relevant positions on a web page. We introduce PB-MHB, a new bandit-based algorithm for online recommender systems that uses the Thompson sampling framework with a Metropolis-Hastings approximation. The algorithm handles a display setting governed by the position-based model. Our sampling method does not require as input the probability that a user looks at a given position on the web page, which is difficult to obtain in some applications. Experiments on simulated and real datasets show that our method, with less prior information, delivers better recommendations than state-of-the-art algorithms.
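
The following is a rough illustrative sketch, not the authors' implementation, of one Thompson-sampling round under the position-based model (PBM), where the click probability of item i shown at position l is theta[i] * kappa[l], and the posterior over (theta, kappa) is approximated with random-walk Metropolis-Hastings because no conjugate posterior is available when the examination probabilities kappa are unknown. All names, the uniform priors, and the Gaussian proposal scale are assumptions for illustration.

import numpy as np

rng = np.random.default_rng(0)

def log_posterior(theta, kappa, clicks, views):
    """Log posterior of PBM parameters under uniform (0, 1) priors.

    clicks[i, l] / views[i, l]: click and display counts so far for
    item i at position l.
    """
    if np.any(theta <= 0) or np.any(theta >= 1):
        return -np.inf
    if np.any(kappa <= 0) or np.any(kappa >= 1):
        return -np.inf
    p = np.outer(theta, kappa)  # click probability per (item, position)
    return np.sum(clicks * np.log(p) + (views - clicks) * np.log1p(-p))

def mh_sample(theta, kappa, clicks, views, n_steps=50, scale=0.05):
    """Random-walk Metropolis-Hastings draw of (theta, kappa)."""
    lp = log_posterior(theta, kappa, clicks, views)
    for _ in range(n_steps):
        # Symmetric Gaussian proposal, so the acceptance ratio is
        # just the posterior ratio.
        theta_new = theta + scale * rng.standard_normal(theta.shape)
        kappa_new = kappa + scale * rng.standard_normal(kappa.shape)
        lp_new = log_posterior(theta_new, kappa_new, clicks, views)
        if np.log(rng.uniform()) < lp_new - lp:  # accept/reject
            theta, kappa, lp = theta_new, kappa_new, lp_new
    return theta, kappa

def recommend(theta, kappa):
    """Greedy display w.r.t. the sampled parameters: the best items go
    to the most-examined positions (assumes n_items >= n_positions)."""
    n_positions = len(kappa)
    top_items = np.argsort(theta)[::-1][:n_positions]
    order = np.argsort(kappa)[::-1]
    display = np.empty(n_positions, dtype=int)
    display[order] = top_items  # j-th best item at j-th best position
    return display

# Toy round: 5 items, 3 positions, no observations yet (flat posterior).
clicks = np.zeros((5, 3))
views = np.zeros((5, 3))
theta = rng.uniform(0.1, 0.9, 5)
kappa = rng.uniform(0.1, 0.9, 3)
theta, kappa = mh_sample(theta, kappa, clicks, views)
print(recommend(theta, kappa))

In practice one would warm-start the chain from the previous round's sample to reduce burn-in; the paper's actual proposal distribution and step schedule may differ from this sketch.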