Keywords: Counterfactual Inference, Markov Decision Processes
Abstract: Our work addresses a fundamental problem in the context of counterfactual inference for Markov Decision Processes (MDPs). Given an MDP path $\tau$, counterfactual inference allows us to derive counterfactual paths $\tau'$ describing _what-if_ versions of $\tau$ obtained under action sequences different from those observed in $\tau$. However, as the counterfactual states and actions deviate from the observed ones over time, _the observation $\tau$ may no longer influence the counterfactual world_, meaning that the analysis is no longer tailored to the individual observation and yields interventional outcomes rather than counterfactual ones. This issue specifically affects the popular Gumbel-max structural causal model used for MDP counterfactuals, yet it has remained overlooked until now. In this work, we introduce a formal characterisation of influence based on comparing counterfactual and interventional distributions. We devise an algorithm to construct counterfactual models that automatically satisfy influence constraints. Leveraging such models, we derive counterfactual policies that are not just optimal for a given reward structure but also remain tailored to the observed path. Even though there is an unavoidable trade-off between policy optimality and the strength of the influence constraints, our experiments demonstrate that it is possible to derive (near-)optimal policies while remaining under the influence of the observation.
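For intuition about the setting the abstract describes, the following is a minimal, hypothetical Python sketch (not the paper's algorithm) of the standard Gumbel-max SCM counterfactual step for a single categorical MDP transition: infer Gumbel noise consistent with the observed successor state, then reuse that noise under a counterfactual action's transition probabilities. The function names (`posterior_gumbels`, `counterfactual_next_state`) and the example probabilities are illustrative assumptions.

```python
import numpy as np

def sample_gumbel(loc, rng):
    # Standard Gumbel sample with location `loc`.
    return loc - np.log(-np.log(rng.uniform(size=np.shape(loc))))

def posterior_gumbels(log_probs, observed, rng):
    """Sample Gumbel noise g such that argmax_j(log_probs[j] + g[j]) == observed,
    using the top-down truncated-Gumbel construction."""
    n = len(log_probs)
    # The maximum of the shifted Gumbels is Gumbel(logsumexp(log_probs)).
    max_val = sample_gumbel(np.logaddexp.reduce(log_probs), rng)
    shifted = np.empty(n)
    shifted[observed] = max_val
    for j in range(n):
        if j != observed:
            g = sample_gumbel(log_probs[j], rng)
            # Truncate below max_val so the observed outcome remains the argmax.
            shifted[j] = -np.log(np.exp(-max_val) + np.exp(-g))
    return shifted - log_probs  # raw Gumbel noise terms

def counterfactual_next_state(p_obs, p_cf, observed_next, rng):
    """Counterfactual successor under the Gumbel-max SCM: reuse the noise
    inferred from the observed transition with counterfactual probabilities."""
    g = posterior_gumbels(np.log(p_obs), observed_next, rng)
    return int(np.argmax(np.log(p_cf) + g))

# Illustrative usage (made-up transition kernels):
rng = np.random.default_rng(0)
p_obs = np.array([0.7, 0.2, 0.1])  # P(s' | s, observed action)
p_cf = np.array([0.1, 0.6, 0.3])   # P(s' | s, counterfactual action)
print(counterfactual_next_state(p_obs, p_cf, observed_next=0, rng=rng))
```

Repeating this step along a path produces a counterfactual trajectory; the phenomenon the abstract highlights is that, as counterfactual states drift from the observed ones, the inferred noise carries less and less information from the observation.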
Supplementary Material: zip
Publication Agreement: pdf
Submission Number: 54