On Adaptivity and Confounding in Contextual Bandit Experiments

09 Oct 2021 (modified: 21 Oct 2021), NeurIPS 2021 Workshop DistShift, Spotlight
Keywords: contextual bandit, deconfounded Thompson sampling, day-of-week effect, nonstationary confounder, distribution shift, delayed observation
TL;DR: We highlight and address an underappreciated issue in bandit experiments, where the presence of nonstationary confounders creates effective distribution shift.
Abstract: Multi-armed bandit algorithms minimize the experimentation cost required to converge on optimal behavior. They do so by rapidly shifting experimentation effort away from poorly performing actions as feedback is observed. But this desirable feature makes them sensitive to confounding. We highlight, for instance, that popular bandit algorithms cannot address the problem of identifying the best action when day-of-week effects may confound inferences. In response, this paper formulates a general model of contextual bandit experiments with nonstationary contexts, which act as confounders for inference and can also be viewed as distribution shifts occurring in the earlier periods of the experiment. This general model further allows the target (or population) distribution used to determine the best action to differ from the empirical distribution of contexts observed during the experiment. The paper proposes deconfounded Thompson sampling, which makes simple but critical modifications to the way Thompson sampling is usually applied. Theoretical guarantees suggest the algorithm strikes a delicate balance between adaptivity and robustness to confounding and distribution shift. It attains asymptotic lower bounds on the number of samples required to confidently identify the best action, suggesting optimal adaptivity, but also satisfies strong performance guarantees in the presence of day-of-week effects and delayed observations, suggesting unusual robustness.
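The central idea in the abstract, scoring actions under a target distribution over contexts rather than only under the contexts seen during the experiment, can be illustrated with a toy simulation. The sketch below is an assumption-laden illustration, not the paper's exact algorithm: it keeps a Beta posterior per (arm, context) pair, and at each step samples from the posterior but ranks arms by their expected reward averaged under a fixed target context distribution (e.g. uniform over days of the week). The names `target_dist` and `choose_arm`, and the Bernoulli-reward modeling, are hypothetical choices for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

n_arms, n_contexts = 3, 7                    # e.g. contexts = days of the week
target_dist = np.full(n_contexts, 1 / 7)     # target (population) context distribution

# Beta(alpha, beta) posterior over each Bernoulli reward mean, per (arm, context)
alpha = np.ones((n_arms, n_contexts))
beta = np.ones((n_arms, n_contexts))

def choose_arm():
    # Thompson step: draw one sampled mean per (arm, context) ...
    theta = rng.beta(alpha, beta)
    # ... then score each arm under the *target* context distribution,
    # not just the context observed today (the deconfounding idea).
    return int(np.argmax(theta @ target_dist))

def update(arm, context, reward):
    alpha[arm, context] += reward
    beta[arm, context] += 1 - reward

# Toy run: true means vary with context, creating a day-of-week confounder
true_means = rng.uniform(0.2, 0.8, size=(n_arms, n_contexts))
for t in range(2000):
    context = t % n_contexts                 # contexts arrive nonstationarily (cycling days)
    arm = choose_arm()
    reward = int(rng.random() < true_means[arm, context])
    update(arm, context, reward)
```

Vanilla Thompson sampling would instead condition on today's context alone (`np.argmax(theta[:, context])`); averaging over `target_dist` is what lets the procedure target the best action for the population distribution even when early samples are skewed toward a few contexts.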