Calibrated and Conformal Propensity Scores for Causal Effect Estimation

Published: 26 Apr 2024, Last Modified: 15 Jul 2024
Venue: UAI 2024 (poster)
License: CC BY 4.0
Keywords: Causal Inference, Conformal Prediction, Propensity Scores, Uncertainties, Calibration, Causal Machine Learning
Abstract: Propensity scores are commonly used to balance observed covariates while estimating treatment effects. We argue that the probabilistic output of a learned propensity score model should be calibrated, i.e., a predicted treatment probability of 90% should correspond to 90% of those individuals actually being assigned to the treatment group. We propose simple recalibration techniques to ensure this property. We prove that calibration is a necessary condition for unbiased treatment effect estimation when using popular inverse propensity weighted and doubly robust estimators. We derive error bounds on causal effect estimates that directly relate to the quality of uncertainties provided by the probabilistic propensity score model, and we show that calibration strictly improves this error bound while also avoiding extreme propensity weights. We demonstrate improved causal effect estimation with calibrated propensity scores on several tasks, including high-dimensional image covariates and genome-wide association studies (GWASs). Calibrated propensity scores speed up GWAS analysis by more than twofold by enabling the use of simpler models that are faster to train.
List Of Authors: Deshpande, Shachi and Kuleshov, Volodymyr
Submission Number: 90
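
Illustrative sketch (not part of the paper's artifact): the abstract describes recalibrating a learned propensity model and plugging the calibrated scores into an inverse propensity weighted (IPW) estimator. The code below is a minimal sketch of that general idea, assuming synthetic data, a logistic-regression propensity model, a held-out calibration split, and isotonic regression as the recalibration step; the variable names (X, T, Y, e_hat) and these modeling choices are illustrative assumptions, not the authors' exact procedure.

# Minimal sketch: recalibrate a propensity model, then use the calibrated
# scores in an inverse propensity weighted (IPW) estimate of the ATE.
# Synthetic data and the isotonic recalibration step are illustrative
# assumptions, not the authors' exact method.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.isotonic import IsotonicRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, d = 2000, 5
X = rng.normal(size=(n, d))                      # observed covariates
T = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))  # treatment assignment
Y = 1.0 * T + X[:, 0] + rng.normal(size=n)       # outcome (true ATE = 1)

# Fit the propensity model on one fold, recalibrate on a held-out fold.
X_fit, X_cal, T_fit, T_cal = train_test_split(X, T, test_size=0.5, random_state=0)
base = LogisticRegression().fit(X_fit, T_fit)
raw_cal = base.predict_proba(X_cal)[:, 1]

# Isotonic recalibration: map raw scores to empirically calibrated
# probabilities, clipped away from 0 and 1 to avoid extreme weights.
iso = IsotonicRegression(y_min=1e-3, y_max=1 - 1e-3, out_of_bounds="clip")
iso.fit(raw_cal, T_cal)

# Calibrated propensity scores for all individuals.
e_hat = iso.predict(base.predict_proba(X)[:, 1])

# IPW (Horvitz-Thompson) estimate of the average treatment effect.
ate_ipw = np.mean(T * Y / e_hat - (1 - T) * Y / (1 - e_hat))
print(f"IPW ATE with calibrated propensity scores: {ate_ipw:.3f}")

Clamping the calibrated scores away from 0 and 1 (via y_min/y_max above) mirrors the abstract's point that calibration helps avoid extreme propensity weights; the same calibrated scores could equally be plugged into a doubly robust estimator.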