Keywords: Causal, Inference, Generalizability, Validation, Testing
TL;DR: We propose a statistical framework for evaluating out-of-domain generalizability for causal inference algorithms.
Abstract: Ensuring robust model performance in diverse real-world scenarios requires addressing generalizability across domains with covariate shifts. However, no formal procedure exists for statistically evaluating the generalizability of machine learning algorithms. Existing methods often rely on arbitrary proxy predictive metrics such as mean squared error, which do not directly answer whether a model can generalize. To address this gap in the domain of causal inference, we propose a systematic framework for statistically evaluating the generalizability of high-dimensional causal inference models. Our approach uses the frugal parameterization to flexibly simulate from fully and semi-synthetic causal benchmarks, offering a comprehensive evaluation of both mean and distributional regression methods. Because it is grounded in real-world data, our method yields more realistic evaluations, a quality often missing from current work that relies on simplified datasets. Furthermore, by combining simulation with statistical testing, our framework avoids over-reliance on conventional metrics and provides statistical safeguards for decision making.
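To make the idea of statistically testing out-of-domain generalizability concrete, here is a minimal illustrative sketch, not the paper's frugal-parameterization framework: it simulates a toy causal model in a source domain and a covariate-shifted target domain, fits a simple outcome regression on the source, and applies a hypothesis test to ask whether predictive error degrades under the shift. The model, the shift (`x_mean`), and the choice of a Welch test on squared errors are assumptions made for illustration only.

```python
# Minimal, illustrative sketch (not the paper's implementation): simulate a
# source domain and a covariate-shifted target domain from a toy causal model,
# fit an outcome regression on the source, and statistically test whether
# predictive error degrades out of domain.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def simulate(n, x_mean):
    """Toy causal model: X -> T, (X, T) -> Y, with a constant treatment effect of 2."""
    x = rng.normal(x_mean, 1.0, n)                 # covariate (its mean shifts across domains)
    t = rng.binomial(1, 1 / (1 + np.exp(-x)))      # treatment assignment depends on X
    y = 2.0 * t + x + rng.normal(0.0, 1.0, n)      # outcome
    return x, t, y

# Source domain (training) and covariate-shifted target domain.
xs, ts, ys = simulate(2000, x_mean=0.0)
xt, tt, yt = simulate(2000, x_mean=1.0)

# Fit a simple outcome model Y ~ 1 + T + X on the source domain by least squares.
Xs = np.column_stack([np.ones_like(xs), ts, xs])
beta, *_ = np.linalg.lstsq(Xs, ys, rcond=None)

# Held-out source data provide an in-domain error baseline.
xv, tv, yv = simulate(2000, x_mean=0.0)
Xv = np.column_stack([np.ones_like(xv), tv, xv])
Xt = np.column_stack([np.ones_like(xt), tt, xt])

sq_err_in = (yv - Xv @ beta) ** 2    # in-domain squared prediction errors
sq_err_out = (yt - Xt @ beta) ** 2   # out-of-domain squared prediction errors

# One-sided Welch test: is out-of-domain error significantly larger than in-domain error?
res = stats.ttest_ind(sq_err_out, sq_err_in, equal_var=False, alternative="greater")
print(f"mean in-domain MSE:  {sq_err_in.mean():.3f}")
print(f"mean out-of-domain MSE: {sq_err_out.mean():.3f}")
print(f"p-value for degradation under covariate shift: {res.pvalue:.4f}")
```

In this toy example the fitted model is correctly specified, so the test should typically fail to reject; the proposed framework instead builds such comparisons on frugal-parameterized, real-data-grounded benchmarks (see the linked code repository for the authors' implementation).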
Supplementary Material: zip
Latex Source Code: zip
Code Link: https://github.com/rje42/DomainChange
Signed PMLR Licence Agreement: pdf
Readers: auai.org/UAI/2025/Conference, auai.org/UAI/2025/Conference/Area_Chairs, auai.org/UAI/2025/Conference/Reviewers, auai.org/UAI/2025/Conference/Submission393/Authors, auai.org/UAI/2025/Conference/Submission393/Reproducibility_Reviewers
Submission Number: 393