DISCRET: a self-interpretable framework for treatment effect estimation

Submitted to ICLR 2024 on 22 Sept 2023 (last modified: 11 Feb 2024)
Supplementary Material: zip
Primary Area: neurosymbolic & hybrid AI systems (physics-informed, logic & formal reasoning, etc.)
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: treatment effect estimation, interpretability
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Abstract: Individual treatment effect (ITE) estimation is of great importance for healthcare and beyond. While most existing solutions focus on accurate treatment effect estimation, they rely on non-interpretable black-box models that can hinder stakeholders from understanding the underlying factors driving the prediction. To address this issue, we propose DISCRET, a self-interpretable framework that is inspired by how stakeholders make critical decisions in practice. DISCRET identifies samples similar to a target sample from a database by using interpretable rules and employs their treatment effect as the estimated ITE for the target sample. We present a deep reinforcement learning-based rule learning algorithm in DISCRET to achieve accurate ITE estimation. We conduct extensive experiments over tabular, natural language, and image settings. Our evaluation shows that DISCRET not only achieves performance comparable to that of black-box models but also generates more faithful explanations than state-of-the-art post-hoc methods and self-interpretable models.
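To make the retrieval-based estimation idea concrete, the following is a minimal hypothetical sketch of how a learned interpretable rule could select similar samples from a database and yield an ITE estimate as the difference of mean outcomes between matched treated and control samples. The rule format, feature names, and data are illustrative assumptions, not the paper's actual algorithm (which learns the rules via deep reinforcement learning).

```python
import operator

# Illustrative rule format: a conjunction of (feature, op, threshold) predicates.
OPS = {"<=": operator.le, ">": operator.gt, "==": operator.eq}

def matches(sample, rule):
    """Check whether a sample satisfies every predicate in the rule."""
    return all(OPS[op](sample[feat], thr) for feat, op, thr in rule)

def estimate_ite(database, rule):
    """Estimate ITE from database samples matched by the rule:
    mean outcome of treated matches minus mean outcome of control matches."""
    matched = [s for s in database if matches(s, rule)]
    treated = [s["outcome"] for s in matched if s["treatment"] == 1]
    control = [s["outcome"] for s in matched if s["treatment"] == 0]
    if not treated or not control:
        return None  # rule too restrictive: no comparable samples found
    return sum(treated) / len(treated) - sum(control) / len(control)

# Toy database with one feature, a binary treatment, and observed outcomes.
db = [
    {"age": 65, "treatment": 1, "outcome": 0.8},
    {"age": 70, "treatment": 0, "outcome": 0.3},
    {"age": 30, "treatment": 1, "outcome": 0.5},
    {"age": 68, "treatment": 0, "outcome": 0.4},
]
# In DISCRET, a policy would generate this rule for a given target sample;
# here it is fixed by hand for illustration.
rule = [("age", ">", 60)]
print(estimate_ite(db, rule))  # ≈ 0.8 - 0.35 = 0.45
```

In the full framework, the rule itself serves as the explanation: the stakeholder can read exactly which samples were judged similar to the target and why.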
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 6022