Evaluating Counterfactual Data Augmentation in Reinforcement Learning

Published: 24 Apr 2026, Last Modified: 24 Apr 2026 · CauScale 2026 · CC BY 4.0
Keywords: Causal Reinforcement Learning, Counterfactual Data Augmentation, Offline Reinforcement Learning, Structural Causal Models, Robustness
TL;DR: This work presents a ground-up reproduction of the CTRL framework, evaluating counterfactual data augmentation via a multi-factor validation matrix in CartPole-SD and extending external-validity testing to LunarLander, MuJoCo, and D4RL environments.
Abstract: We present a verified, open-source reimplementation and extension of CTRL, a causal reinforcement learning method using counterfactual data augmentation. Through a validation matrix across diverse datasets (CartPole, LunarLander, MuJoCo, D4RL), we show that counterfactual augmentation is conditionally useful rather than uniformly superior, with reliability depending on generator fidelity, data regime, and evaluation protocol. By comparing against a non-causal world model, we identify a critical "coverage-versus-bias" tradeoff in which excessive augmentation amplifies transition inaccuracies. Finally, we fill a significant gap in the community by providing a verified, ground-up open-source implementation of the CTRL architecture to facilitate further research in causal RL.
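The counterfactual data augmentation at the core of this line of work follows the standard abduction-action-prediction recipe for structural causal models. A minimal sketch is given below, assuming an additive-noise SCM over transitions; the function names and the toy dynamics model are illustrative placeholders, not the CTRL implementation:

```python
import numpy as np

def infer_noise(s, a, s_next, dynamics):
    """Abduction: under an additive-noise SCM s' = f(s, a) + u,
    recover the exogenous noise u from an observed transition."""
    return s_next - dynamics(s, a)

def counterfactual_transition(s, a_cf, u, dynamics):
    """Action + prediction: replay the same exogenous noise u
    under an alternative (intervened) action a_cf."""
    return dynamics(s, a_cf) + u

# Toy linear dynamics as a stand-in for a learned model (hypothetical).
def toy_dynamics(s, a):
    return 0.9 * s + 0.1 * a

# Observed transition: s=1.0, a=1.0, s'=1.05.
u = infer_noise(1.0, 1.0, 1.05, toy_dynamics)          # u = 0.05
s_cf = counterfactual_transition(1.0, 0.0, u, toy_dynamics)  # 0.9 + 0.05 = 0.95
```

The "coverage-versus-bias" tradeoff noted in the abstract arises at exactly this point: every augmented transition inherits the error of the learned `dynamics` model, so generating more counterfactual data widens state-action coverage while compounding model bias.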
Submission Number: 23