Keywords: Causal inference, effect estimation, selective prediction
TL;DR: We present BICauseTree: an interpretable balancing method that identifies clusters where natural experiments occur locally, using custom optimization over decision trees to reduce treatment allocation bias.
Abstract: Causal effect estimation from observational data is an important analytical approach for data-driven policy-making. However, due to the inherent lack of ground truth in causal inference, accepting such recommendations requires transparency and explainability. To date, attempts at transparent causal effect estimation have consisted of applying post hoc explanation methods to black-box models, which are not inherently interpretable. In this manuscript, we present BICauseTree: an interpretable balancing method that identifies clusters where natural experiments occur locally. Our approach builds on decision trees to reduce treatment allocation bias. As a result, we can identify subpopulations presenting positivity violations and exclude them, while providing a covariate-based definition of the target population on which inference remains valid. We characterize the method's performance using synthetic and realistic datasets, explore its bias-interpretability tradeoff, and show that it is comparable with existing approaches.
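To make the abstract's idea concrete, below is a minimal, hedged sketch of the general approach it describes: partition the covariate space with a decision tree fit to the treatment assignment, treat leaves with adequate treatment/control overlap as local "natural experiments", exclude positivity-violating leaves, and aggregate leaf-level effect estimates. This is not the authors' implementation; it uses a standard scikit-learn tree as a stand-in for the paper's custom bias-reducing splitting criterion, on synthetic data, with illustrative thresholds.

```python
# Sketch only: standard sklearn tree in place of BICauseTree's custom split criterion;
# all data, thresholds, and variable names are assumptions for illustration.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Synthetic observational data: covariates X, confounded treatment A, outcome Y.
n = 5000
X = rng.normal(size=(n, 3))
propensity = 1 / (1 + np.exp(-(1.5 * X[:, 0] - 1.0 * X[:, 1])))
A = rng.binomial(1, propensity)
Y = 2.0 * A + X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n)  # true effect = 2.0

# Step 1: fit a tree to the treatment assignment so that each leaf groups units with
# similar treatment propensity, reducing allocation bias within the leaf.
tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=200, random_state=0)
tree.fit(X, A)
leaf_id = tree.apply(X)

# Step 2: keep only leaves with sufficient treatment/control overlap (a crude
# positivity filter); the 0.1/0.9 bounds are illustrative, not the paper's rule.
effects, weights = [], []
for leaf in np.unique(leaf_id):
    mask = leaf_id == leaf
    p_treated = A[mask].mean()
    if 0.1 < p_treated < 0.9:
        diff = Y[mask & (A == 1)].mean() - Y[mask & (A == 0)].mean()
        effects.append(diff)
        weights.append(mask.sum())

# Step 3: aggregate leaf-level differences into an effect estimate on the retained,
# positivity-respecting target population (whose definition is given by the tree's splits).
ate = np.average(effects, weights=weights)
print(f"Estimated effect on retained population: {ate:.2f}")
```

The tree structure doubles as the interpretable artifact: the path to each retained leaf is a covariate-based definition of a subpopulation where inference is attempted, and the excluded leaves describe where positivity is violated.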
Supplementary Material: zip
Submission Number: 2894