Keywords: Causal Bandits, Causality, Causal Inference, Simple Regret, Contextual Bandits, Causal Contextual Bandits, Convex Exploration, Intervention Complexity, Simple Regret Lower Bound
TL;DR: We propose a near-optimal algorithm for simple regret in causal contextual bandits where the context is stochastically dependent on an initial action chosen by the learner.
Abstract: We study a variant of causal contextual bandits where the context is chosen based on an initial intervention chosen by the learner. At the beginning of each round, the learner selects an initial action, based on which the environment reveals a stochastic context. The learner then selects a final action and receives a reward. Given $T$ rounds of interaction with the environment, the learner's objective is to learn a policy (for selecting the initial and final actions) that maximizes expected reward. In this paper, we study the setting where every action corresponds to an intervention on a node of a known causal graph. We extend prior work from the deterministic context setting to obtain simple regret minimization guarantees. This is achieved through an instance-dependent causal parameter, $\lambda$, which characterizes our upper bound. Furthermore, we prove that our simple regret guarantee is essentially tight for a large class of instances. A key feature of our work is that we use convex optimization to address the bandit exploration problem. We also conduct experiments to validate our theoretical results, and release our code at [github.com/adaptiveContextualCausalBandits/aCCB](https://github.com/adaptiveContextualCausalBandits/aCCB).
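To make the interaction protocol in the abstract concrete, here is a minimal sketch of one run of the environment with a naive uniform-exploration baseline. All names, sizes, and the environment model are hypothetical illustrations, not the paper's algorithm; the actual method and experiments are in the linked repository.

```python
import numpy as np

# Hypothetical sketch of the two-stage interaction: initial action -> stochastic
# context -> final action -> reward, repeated for T rounds, then output a policy.
rng = np.random.default_rng(0)

num_initial_actions = 3   # interventions available before the context is revealed
num_contexts = 4          # possible contexts; distribution depends on the initial action
num_final_actions = 5     # interventions available after observing the context
T = 1000                  # exploration budget (number of rounds)

# Hypothetical environment: context distribution per initial action, and
# mean reward for each (context, final action) pair.
context_probs = rng.dirichlet(np.ones(num_contexts), size=num_initial_actions)
mean_reward = rng.uniform(size=(num_contexts, num_final_actions))

# Uniform-exploration baseline (not the paper's convex-exploration algorithm):
# estimate rewards from T rounds, then return the greedy policy.
counts = np.zeros((num_initial_actions, num_contexts, num_final_actions))
sums = np.zeros_like(counts)

for _ in range(T):
    a0 = rng.integers(num_initial_actions)                # learner picks initial action
    ctx = rng.choice(num_contexts, p=context_probs[a0])   # environment reveals context
    a1 = rng.integers(num_final_actions)                  # learner picks final action
    reward = rng.binomial(1, mean_reward[ctx, a1])        # Bernoulli reward
    counts[a0, ctx, a1] += 1
    sums[a0, ctx, a1] += reward

est = np.divide(sums, counts, out=np.zeros_like(sums), where=counts > 0)
# Value of each initial action under the greedy final-action choice per context.
per_context_value = est.max(axis=2)                       # shape: (initial, context)
policy_value = (context_probs * per_context_value).sum(axis=1)
print("estimated best initial action:", int(policy_value.argmax()))
```

Simple regret here is the gap between the expected reward of the returned policy and that of the optimal policy; the paper's contribution is an exploration scheme whose simple regret scales with the instance-dependent parameter $\lambda$ rather than with uniform exploration.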
Submission Number: 319