TL;DR: We design new differentially private algorithms with improved privacy and utility tradeoffs for adversarial bandits and bandits with expert advice.
Abstract: We design new differentially private algorithms for the problems of adversarial bandits and bandits with expert advice. For adversarial bandits, we give a simple and efficient conversion of any non-private bandit algorithm to a private bandit algorithm. Instantiating our conversion with existing non-private bandit algorithms gives a regret upper bound of $O\left(\frac{\sqrt{KT}}{\sqrt{\epsilon}}\right)$, improving upon the existing upper bound $O\left(\frac{\sqrt{KT \log(KT)}}{\epsilon}\right)$ for all $\epsilon \leq 1$. In particular, our algorithms allow for sublinear expected regret even when $\epsilon \leq \frac{1}{\sqrt{T}}$, establishing the first known separation between central and local differential privacy for this problem. For bandits with expert advice, we give the first differentially private algorithms, with expected regret $O\left(\frac{\sqrt{NT}}{\sqrt{\epsilon}}\right)$, $O\left(\frac{\sqrt{KT\log(N)}\log(KT)}{\epsilon}\right)$, and $\tilde{O}\left(\frac{N^{1/6}K^{1/2}T^{2/3}\log(NT)}{\epsilon ^{1/3}} + \frac{N^{1/2}\log(NT)}{\epsilon}\right)$, where $K$ and $N$ are the number of actions and experts respectively. These rates give sublinear regret for different combinations of small and large $K$, $N$, and $\epsilon$.
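The abstract does not spell out the conversion itself, so the sketch below is only a generic illustration of the standard idea of privatizing bandit feedback: run EXP3 but perturb each observed loss with Laplace noise before the importance-weighted update. The function name `private_exp3`, the noise placement, and the learning-rate choice are all assumptions for illustration, not the paper's actual conversion or its guarantees.

```python
import math
import random


def private_exp3(T, K, eps, loss_fn, eta=None, seed=0):
    """Illustrative sketch: EXP3 with Laplace-perturbed losses.

    NOT the paper's conversion -- a generic example of adding
    Laplace noise (scale 1/eps) to each observed loss before the
    usual importance-weighted exponential-weights update.
    loss_fn(t, a) should return a loss in [0, 1].
    """
    rng = random.Random(seed)
    if eta is None:
        # standard EXP3 learning rate (an assumption here)
        eta = math.sqrt(math.log(K) / (K * T))
    w = [0.0] * K  # log-weights per arm
    total_loss = 0.0
    for t in range(T):
        # softmax of log-weights, stabilized by subtracting the max
        m = max(w)
        probs = [math.exp(x - m) for x in w]
        s = sum(probs)
        probs = [p / s for p in probs]
        a = rng.choices(range(K), weights=probs)[0]
        loss = loss_fn(t, a)
        total_loss += loss
        # Laplace(0, 1/eps) as a difference of two Exp(eps) draws
        noise = rng.expovariate(eps) - rng.expovariate(eps)
        # importance-weighted estimate of the noisy loss
        est = (loss + noise) / probs[a]
        w[a] -= eta * est
    return total_loss
```

A usage note: with `loss_fn` returning a fixed low loss for one arm and high losses for the rest, the sampled arm concentrates on the low-loss arm over time, with more noise (smaller `eps`) slowing that concentration.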
Lay Summary: In many applications—like online advertising, medical trials, or recommendation systems—machine learning algorithms must sequentially make decisions over time while protecting sensitive user data. This paper tackles this challenge in a difficult setting called adversarial bandits, where the environment can behave unpredictably and only limited feedback is available at each time step. We design new algorithms that are both differentially private (meaning they rigorously protect individual data) and competitive (meaning their performance remains strong over time). We show how to transform any existing bandit algorithm into a private one with improved performance guarantees, particularly in regimes with high privacy requirements. Importantly, our methods achieve better outcomes than all previous approaches and uncover a fundamental gap between two models of privacy (central and local). Using these techniques, we also provide the first private algorithms for a related problem called adversarial bandits with expert advice, enabling private decision-making for more personalized applications.
Primary Area: Social Aspects->Privacy
Keywords: Differential Privacy, Bandits
Submission Number: 7104