Multi-Armed Bandits with Interference: Bridging Causal Inference and Adversarial Bandits

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
TL;DR: We study a multiple-play adversarial bandit problem in which the reward of each unit depends on the arms assigned to ALL units.
Abstract: Experimentation with interference poses a significant challenge on contemporary online platforms. Prior research on experimentation with interference has concentrated on the final output of a policy; cumulative performance, while equally important, is less well understood. To address this gap, we introduce the problem of Multi-armed Bandits with Interference (MABI), where the learner assigns an arm to each of $N$ experimental units over $T$ rounds. The reward of each unit in each round depends on the treatments of all units, where the interference between two units decays with their distance. The reward functions are chosen by an adversary and may vary arbitrarily over time and across different units. We first show that the optimal expected regret (against the best fixed-arm policy) is $\tilde O(\sqrt T)$ and can be achieved by a switchback policy. However, the regret (as a random variable) of any switchback policy suffers from high variance, since such a policy does not exploit the scale of $N$. We propose a policy based on a novel clustered randomization scheme, whose regret (i) is optimal in expectation and (ii) admits a high-probability bound that vanishes in $N$.
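To make the switchback baseline concrete, here is a minimal sketch, assuming $K$ arms, per-unit rewards in $[0,1]$, and a hypothetical oracle `reward_fn(t, assignment)` returning each unit's reward for a full assignment vector. It uses a standard exponential-weights (EXP3-style) update and is meant only as an illustration, not the paper's exact policy.

```python
import numpy as np

def switchback_exp3(reward_fn, N, K, T, eta=None, rng=None):
    """Illustrative switchback baseline: each round, ONE arm is drawn from an
    exponential-weights distribution and assigned to ALL N units.
    `reward_fn(t, assignment)` is a hypothetical oracle (not from the paper)
    returning per-unit rewards in [0, 1] for the full assignment vector."""
    rng = np.random.default_rng() if rng is None else rng
    eta = np.sqrt(np.log(K) / (T * K)) if eta is None else eta  # standard EXP3 rate
    log_w = np.zeros(K)                        # log-weights over the K arms
    total = 0.0
    for t in range(T):
        p = np.exp(log_w - log_w.max())
        p /= p.sum()
        a = rng.choice(K, p=p)                 # one arm for the whole round
        rewards = reward_fn(t, np.full(N, a))  # switchback: same arm on every unit
        r = float(np.mean(rewards))            # average reward across units
        log_w[a] += eta * r / p[a]             # importance-weighted update
        total += r
    return total
```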
Lay Summary: Imagine we’re running an online food delivery platform and want to test multiple promotion campaigns (i.e., treatments) to maximize total revenue over a sales season. A key challenge is interference between locations (e.g., ZIP codes): the effectiveness of a promotion at one location can depend heavily on what promotions are assigned to nearby locations, since they may compete for shared resources like delivery drivers. A naive approach is to assign promotions independently to each location and average the resulting revenues. Another naive method is switchback: assigning the historically best-performing promotion to all locations in each period. We propose a more effective alternative based on clustered randomization: we first group locations into clusters and assign promotions at the cluster level, favoring arms with strong historical performance. We show that this approach outperforms the above baselines by achieving the best possible worst-case expected revenue while also being more robust — i.e., significantly less likely to result in very low revenue.
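For contrast with the switchback sketch above, here is a hedged sketch of the clustered-randomization idea described in the lay summary: units in the same cluster always receive the same promotion, and arms with stronger historical performance are favored through shared exponential weights. The names `clusters` and `reward_fn`, and the learning-rate and scaling choices, are illustrative assumptions rather than the paper's exact algorithm.

```python
import numpy as np

def clustered_exp3(reward_fn, clusters, K, T, eta=None, rng=None):
    """Illustrative clustered-randomization sketch (not the paper's exact
    algorithm): arms are drawn independently per cluster from a shared
    exponential-weights distribution, so nearby units (same cluster) always
    receive the same arm. `clusters` maps each of the N units to a cluster id
    in {0, ..., C-1}; `reward_fn(t, assignment)` is a hypothetical oracle
    returning per-unit rewards in [0, 1]."""
    clusters = np.asarray(clusters)
    N, C = clusters.size, int(clusters.max()) + 1
    rng = np.random.default_rng() if rng is None else rng
    eta = np.sqrt(np.log(K) / (T * K)) if eta is None else eta
    log_w = np.zeros(K)                            # shared log-weights over arms
    total = 0.0
    for t in range(T):
        p = np.exp(log_w - log_w.max())
        p /= p.sum()
        cluster_arms = rng.choice(K, size=C, p=p)  # one arm per cluster
        assignment = cluster_arms[clusters]        # broadcast arms to units
        rewards = np.asarray(reward_fn(t, assignment), dtype=float)
        total += rewards.mean()
        for c in range(C):                         # importance-weighted update,
            a = cluster_arms[c]                    # scaled by 1/C so the total
            r_c = rewards[clusters == c].mean()    # update size matches one round
            log_w[a] += eta * r_c / (p[a] * C)
    return total
```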
Primary Area: Theory->Online Learning and Bandits
Keywords: interference, adversarial bandits, high-probability regret bound
Submission Number: 14449