Keywords: causal inference, distribution shift, robustness, diffusion model, reinforcement learning
TL;DR: We propose a model-agnostic framework, Causal Adversarial Reinforcement-guided Diffusion (CARD), that uses a minimax game to train any CATE learner, improving the CATE learner's robustness against unknown distribution shifts.
Abstract: Estimating the Conditional Average Treatment Effect (CATE) is essential to personalized decision-making in causal inference. However, in real-world practice, CATE models often suffer degraded performance when faced with unknown distribution shifts between training and deployment environments. To tackle this challenge, we introduce **C**ausal **A**dversarial **R**einforcement-guided **D**iffusion **(CARD)**, a model-agnostic framework that can be wrapped around any existing CATE learner to improve its robustness against unknown distribution shifts. CARD formulates the CATE modeling process as a minimax game: a reinforcement learning agent guides a diffusion model to generate adversarial data augmentations that maximize the CATE learner's loss, and the learner is then trained to minimize this worst-case loss, yielding a principled robust optimization procedure. Comprehensive experimental results demonstrate that CARD consistently improves the robustness of diverse CATE learners against challenging data corruptions, including measurement error, missing values, and unmeasured confounding, confirming its broad applicability and effectiveness.
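The minimax procedure described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the RL-guided diffusion augmenter is replaced here by a simple gradient-ascent (FGSM-style) perturbation of the covariates, the CATE learner is a toy linear model with an explicit treatment-effect head, and all variable names and hyperparameters are illustrative assumptions.

```python
import numpy as np

# Toy observational data: covariates X, binary treatment t, outcome y.
rng = np.random.default_rng(0)
n, d = 200, 3
X = rng.normal(size=(n, d))
t = rng.integers(0, 2, size=n).astype(float)
tau_true = X @ np.array([1.0, -0.5, 0.2])          # ground-truth CATE
y = X @ np.array([0.5, 0.5, 0.5]) + t * tau_true + 0.1 * rng.normal(size=n)

# Linear CATE learner: y_hat = X@w + t * (X@v), so X@v is the CATE estimate.
w = np.zeros(d)
v = np.zeros(d)
eps, lr = 0.02, 0.1                                 # adversary budget, learning rate

for step in range(1000):
    # --- Adversary (maximize loss): one FGSM-style step on the inputs,
    # standing in for the RL-guided diffusion augmenter of CARD.
    r = (X @ w + t * (X @ v)) - y                   # residuals
    gX = (2.0 / n) * r[:, None] * (w[None, :] + t[:, None] * v[None, :])
    Xa = X + eps * np.sign(gX)                      # worst-case augmentation

    # --- Learner (minimize worst-case loss): gradient step on augmented data.
    ra = (Xa @ w + t * (Xa @ v)) - y
    w -= lr * (2.0 / n) * Xa.T @ ra
    v -= lr * (2.0 / n) * (Xa * t[:, None]).T @ ra

# Evaluate the CATE head with PEHE (root mean squared error of effect estimates).
pehe = float(np.sqrt(np.mean((X @ v - tau_true) ** 2)))
print(f"PEHE after robust training: {pehe:.3f}")
```

The alternation mirrors the framework's structure: the inner maximization searches for augmentations that hurt the current learner, and the outer minimization fits the learner to that worst case, so the learned effect head remains accurate despite the adversarial perturbations.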
Supplementary Material: zip
Primary Area: causal reasoning
Submission Number: 18947