Exploiting Correlated Auxiliary Feedback in Parameterized Bandits

Published: 21 Sept 2023, Last Modified: 16 Jan 2024 · NeurIPS 2023 poster
Keywords: Parameterized Bandits, Auxiliary Feedback, Control Variate, Regret Minimization
TL;DR: This paper develops a method that exploits auxiliary feedback correlated with the reward to reduce regret, and characterizes the regret reduction in terms of the correlation between the reward and its auxiliary feedback.
Abstract: We study a novel variant of the parameterized bandits problem in which the learner can observe additional auxiliary feedback that is correlated with the observed reward. Such auxiliary feedback is readily available in many real-life applications; e.g., an online platform that wants to recommend the best-rated services to its users can observe a user's rating of a service (reward) and also collect additional information such as service delivery time (auxiliary feedback). In this paper, we first develop a method that exploits auxiliary feedback to build a reward estimator with tighter confidence bounds, leading to smaller regret. We then characterize the regret reduction in terms of the correlation coefficient between the reward and its auxiliary feedback. Experimental results in different settings also verify the performance gain achieved by our proposed method.
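The abstract's core idea — using correlated auxiliary feedback as a control variate to tighten a reward estimate — can be illustrated with a minimal sketch. This is not the paper's estimator; it is a generic control-variate construction under assumed toy data: a reward `y` linearly correlated with auxiliary feedback `w` whose mean is known, with the variance-minimizing coefficient `beta = Cov(Y, W) / Var(W)` estimated from samples.

```python
import numpy as np

def estimates(rng, n=500, w_mean=5.0):
    """One run: return (naive, control-variate) estimates of E[Y]."""
    # Toy model (an assumption, not from the paper): auxiliary feedback W
    # with known mean, and reward Y correlated with W. Here E[Y] = 10.
    w = rng.normal(loc=w_mean, scale=1.0, size=n)
    y = 2.0 * w + rng.normal(scale=0.5, size=n)

    # Variance-minimizing control-variate coefficient Cov(Y, W) / Var(W).
    beta = np.cov(y, w, ddof=1)[0, 1] / np.var(w, ddof=1)

    naive = y.mean()
    cv = (y - beta * (w - w_mean)).mean()  # corrected estimate of E[Y]
    return naive, cv

# Compare the spread of both estimators over repeated runs. The control-variate
# estimator's standard deviation shrinks by roughly sqrt(1 - rho^2), where rho
# is the correlation between reward and auxiliary feedback.
rng = np.random.default_rng(0)
runs = np.array([estimates(rng) for _ in range(200)])
naive_std, cv_std = runs.std(axis=0)
```

With the strong correlation in this toy model (rho^2 ≈ 0.94), `cv_std` comes out well below `naive_std`, mirroring the paper's claim that the regret reduction grows with the correlation between reward and auxiliary feedback.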
Supplementary Material: zip
Submission Number: 9871