Keywords: causality, interpretability, causal inference, KAN, treatment effects, potential outcomes
TL;DR: We propose causalKANs, a framework that transforms neural-based causal estimators into KAN-based estimators that are interpretable by construction.
Abstract: Deep neural networks achieve state-of-the-art performance in estimating heterogeneous treatment effects, but their opacity limits trust and adoption in sensitive domains such as medicine, economics, and public policy. Building on well-established, high-performing causal neural architectures, we propose _causalKANs_, a framework that transforms neural estimators of _conditional average treatment effects_ (CATEs) into Kolmogorov–Arnold Networks (KANs). By incorporating pruning and symbolic simplification, causalKANs yields interpretable closed-form formulas while preserving predictive accuracy. Experiments on benchmark datasets demonstrate that causalKANs performs on par with neural baselines in CATE error metrics, and that even simple KAN variants achieve competitive performance, offering a favorable accuracy–interpretability trade-off. By combining reliability with analytic accessibility, causalKANs provides auditable estimators supported by closed-form expressions and interpretable plots, enabling trustworthy individualized decision-making in high-stakes settings. We release the code for reproducibility.
Supplementary Material: zip
Primary Area: causal reasoning
Submission Number: 11455