Keywords: Contextual bandits, Interference, Online policy optimization, Causal inference, Statistical inference, Regret bound
Abstract: Contextual bandits, which leverage the baseline features of sequentially arriving individuals to optimize cumulative rewards while balancing exploration and exploitation, are critical for online decision-making. Existing approaches typically assume no interference, i.e., that each individual’s action affects only their own reward. This assumption is violated in many practical scenarios, and overlooking interference yields short-sighted policies that maximize only each individual’s immediate outcome, leading to suboptimal decisions and potentially increased regret over time. To close this gap, we introduce the \underline{f}o\underline{r}esighted \underline{o}nline policy with i\underline{nt}erference (FRONT), which explicitly accounts for the long-term impact of the current decision on subsequent decisions and rewards. FRONT employs a sequence of exploratory and exploitative strategies to manage the intricacies of interference, ensuring robust parameter inference and regret minimization. Theoretically, we establish a tail bound for the online estimator and derive the asymptotic distribution of the parameters of interest. We further show that FRONT maintains sublinear regret under two definitions of regret that account for both the immediate and consequential impacts of decisions. The effectiveness of FRONT is demonstrated through extensive simulations and a real-world application to urban hotel profits.
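The abstract describes the setting only at a high level, so the following is a minimal illustrative sketch, not the authors' FRONT algorithm, of why ignoring interference produces a short-sighted policy. The linear reward model, the spillover-from-the-previous-individual structure, the parameter values, and the epsilon-greedy rule are all assumptions made here for illustration; parameters are taken as known to isolate the myopic-versus-foresighted contrast, whereas the paper's method estimates them online with inferential guarantees.

```python
import numpy as np

rng = np.random.default_rng(0)
d, T = 3, 2000                        # context dimension, horizon (illustrative)
theta = np.array([0.5, -0.3, 0.2])    # hypothetical direct-effect parameter
gamma = -0.6                          # hypothetical spillover: acting now lowers
                                      # the *next* individual's reward

def reward(x, a, a_prev):
    # Reward depends on one's own action (direct effect) and on the previous
    # individual's action (interference), plus Gaussian noise.
    return a * (x @ theta) + gamma * a_prev + rng.normal(scale=0.1)

def run(foresighted, eps=0.1):
    total, a_prev = 0.0, 0
    for _ in range(T):
        x = rng.normal(size=d)
        direct = x @ theta            # immediate payoff of action 1 vs. action 0
        # A foresighted policy charges each action for its spillover onto the
        # next reward; a myopic policy values only the immediate payoff.
        value1 = direct + (gamma if foresighted else 0.0)
        # epsilon-greedy exploration (a stand-in for whatever schedule FRONT uses)
        a = int(value1 > 0) if rng.random() > eps else int(rng.integers(2))
        total += reward(x, a, a_prev)
        a_prev = a
    return total

print("myopic cumulative reward     :", run(foresighted=False))
print("foresighted cumulative reward:", run(foresighted=True))
```

With a negative spillover (gamma < 0), the myopic policy over-selects actions whose immediate gain is outweighed by the harm passed on to the next individual, so its cumulative reward falls below the foresighted policy's; this is the gap between maximizing immediate outcomes and accounting for consequential impacts that the abstract refers to.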
Submission Number: 1