Corruption Robust Thompson Sampling for Gaussian Bandits
Abstract: Thompson sampling is one of the most popular learning algorithms for online sequential decision-making and has rich real-world applications. However, traditional Thompson sampling algorithms assume that the observed rewards are uncorrupted, which may not hold in real-world applications where adversarial reward poisoning exists. To make Thompson sampling more reliable, we aim to make it robust against adversarial reward poisoning. In particular, we consider a strong threat model in which the adversary corrupts the reward after observing the agent's action. The main challenge is that the exact posterior of the true reward can no longer be computed, since the agent observes only the corrupted rewards. We address this problem by computing pseudo-posteriors that are less susceptible to manipulation by the attack. We focus on two popular settings, stochastic bandits and contextual linear bandits, with Gaussian priors. **We are the first** to propose robust Thompson-sampling-based algorithms for these two bandit settings, covering both the case where the agent knows the attacker's budget and the case where it does not. We theoretically show that our algorithms guarantee near-optimal regret under any attack strategy.
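The abstract describes widening the posterior so a budget-limited adversary cannot easily steer the sampler. The sketch below is an illustrative toy, not the paper's actual algorithm: Gaussian Thompson sampling on a two-armed bandit where each arm's pseudo-posterior standard deviation is inflated by `C / n` (with `C` an assumed known corruption budget and `n` the arm's pull count), covering the worst-case shift a budget-`C` adversary can induce on an empirical mean. All constants and the inflation rule are hypothetical choices for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-armed Gaussian bandit with unit-variance reward noise.
true_means = np.array([0.2, 0.8])
C = 10.0   # assumed known total corruption budget (illustrative value)
T = 2000   # horizon

sums = np.zeros(2)    # sums of (possibly corrupted) observed rewards
pulls = np.zeros(2)   # number of times each arm was played

for t in range(T):
    # Pseudo-posterior per arm: empirical mean, with standard deviation
    # inflated by C / n so a budget-C adversary cannot shift the sampled
    # mean outside the sampler's uncertainty (illustrative robustification).
    n = np.maximum(pulls, 1.0)
    means = sums / n
    sds = 1.0 / np.sqrt(n) + C / n
    samples = rng.normal(means, sds)
    a = int(np.argmax(samples))          # Thompson step: play the argmax draw

    reward = rng.normal(true_means[a], 1.0)
    # A reward-poisoning adversary could alter `reward` here, subject to C.
    sums[a] += reward
    pulls[a] += 1

best = int(np.argmax(pulls))
print(best)
```

With no actual corruption injected, the inflated variance only slows learning, and the better arm still accumulates most of the pulls over the horizon.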
Submission Number: 149