Keywords: anonymity, multi-armed bandits, online learning
Abstract: In this work, we present and study a new framework for online learning in multi-user systems that provides user anonymity. Specifically, we extend the notion of bandits to obey the standard $k$-anonymity constraint by requiring each observation to be an aggregation of the rewards of at least $k$ users. This yields a simple yet effective framework in which one can learn a clustering of users in an online fashion without observing any individual user's decisions. We initiate the study of anonymous bandits and provide the first sublinear-regret algorithms and lower bounds for this setting.
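To make the $k$-anonymity constraint concrete, here is a minimal toy sketch (our own illustration, not the paper's algorithm) of a single round of feedback: each user pulls its assigned arm, and the learner observes only the summed reward of each group of users on an arm, and only when that group contains at least $k$ users. The Bernoulli reward model and the per-arm grouping are assumptions made for this example.

```python
import random

def anonymous_bandit_round(user_means, assignment, k, rng):
    """One round of a toy anonymous bandit (illustrative sketch).

    user_means[u][a] is user u's mean Bernoulli reward on arm a.
    assignment[u] is the arm assigned to user u this round.
    Returns a dict mapping arm -> aggregated reward, but only for arms
    whose group of assigned users has size >= k (the anonymity constraint).
    """
    # Group users by the arm they were assigned.
    groups = {}
    for user, arm in enumerate(assignment):
        groups.setdefault(arm, []).append(user)

    feedback = {}
    for arm, users in groups.items():
        if len(users) < k:
            continue  # k-anonymity: fewer than k users, no observation
        # Draw each user's Bernoulli reward, but reveal only the sum,
        # never any individual user's realized reward.
        total = sum(1 if rng.random() < user_means[u][arm] else 0
                    for u in users)
        feedback[arm] = total
    return feedback
```

For example, with four users, an assignment placing three of them on arm 0 and one on arm 1, and $k=2$, the learner receives aggregated feedback for arm 0 only; the lone user on arm 1 produces no observation.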
Supplementary Material: zip
TL;DR: We study multi-user multi-armed bandits under a k-anonymity constraint.