Evaluating and Learning Robust Bandit Policies Under Uncertain Causal Mechanisms

Published: 10 Mar 2026, Last Modified: 07 Apr 2026
CLeaR 2026 Poster
License: CC BY 4.0
Keywords: causal bandits, distributional robustness, structural equation models
TL;DR: Reasoning over possible structural equations improves robust bandit policy learning and evaluation when you have background knowledge.
Abstract: Causal graphical models can encode large amounts of structural knowledge, drawn both from the background knowledge of domain experts and from structure discovered in randomized experiments or observational data. However, even when we know the general structure of causal relationships, we often do not know the exact causal mechanisms. In this work, we propose a causal multi-armed bandit evaluation and learning algorithm that can reason effectively despite uncertainty over conditional probability distributions. Further, we show how conditional independence testing can be used to choose variables for modeling. We find that the structural equation model (SEM) approach gives more accurate evaluations than traditional approaches, particularly as the range of possible causal mechanisms grows. The SEM approach also learns low-variance policies, and it learns an optimal policy provided the model is sufficiently well specified. Traditional approaches can converge to local extrema or fail to converge at all.
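To make the abstract's core idea concrete, here is a minimal sketch of robust policy evaluation under mechanism uncertainty: the causal graph (a context variable influencing reward) is fixed, but the conditional probability tables are only known to lie in a candidate set, so a policy is scored by its worst-case expected reward. All variable names, probabilities, and the two-mechanism candidate set are illustrative assumptions, not details from the paper.

```python
# Hypothetical illustration: evaluate bandit policies against a set of
# candidate causal mechanisms (conditional probability tables) and take
# the worst case as a robust value. The graph Z -> reward is assumed
# known; the exact mechanism is not.

def expected_reward(policy, p_z, p_reward):
    """Expected reward of `policy` under one candidate mechanism.

    p_z: P(Z=1) for a binary context variable Z.
    p_reward[(a, z)]: P(reward=1 | action=a, context=z).
    """
    total = 0.0
    for z, pz in ((0, 1.0 - p_z), (1, p_z)):
        a = policy(z)
        total += pz * p_reward[(a, z)]
    return total

def robust_value(policy, mechanisms):
    """Worst-case expected reward over all candidate mechanisms."""
    return min(expected_reward(policy, p_z, p_r) for p_z, p_r in mechanisms)

# Two candidate mechanisms consistent with the same causal graph.
mechanisms = [
    (0.5, {(0, 0): 0.2, (0, 1): 0.8, (1, 0): 0.7, (1, 1): 0.3}),
    (0.6, {(0, 0): 0.3, (0, 1): 0.9, (1, 0): 0.6, (1, 1): 0.4}),
]

context_aware = lambda z: 0 if z == 1 else 1  # match the arm to the context
always_arm0 = lambda z: 0                     # ignore the context

print(robust_value(context_aware, mechanisms))
print(robust_value(always_arm0, mechanisms))
```

In this toy setting the context-aware policy keeps a higher worst-case value than the context-blind one, which is the kind of guarantee a robust evaluation is after.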
Pmlr Agreement: pdf
Submission Number: 31