On Noise Abduction for Answering Counterfactual Queries: A Practical Outlook

Published: 24 Oct 2022, Last Modified: 28 Feb 2023. Accepted by TMLR.
Abstract: A crucial step in counterfactual inference is abduction: inference of the exogenous noise variables. Deep learning approaches model exogenous noise variables as latent variables, and inferring a latent variable incurs both a computational and a statistical cost. In this paper, we show that it may not be necessary to abduct all the noise variables in a structural causal model (SCM) to answer a counterfactual query. In a fully specified causal model with no unobserved confounding, we identify which exogenous noise variables must be abducted for a given counterfactual query. We introduce a graphical condition for noise identification under an action consisting of an arbitrary combination of hard and soft interventions. We report experimental results on both synthetic data and the real-world German Credit dataset, showcasing the promise and usefulness of the proposed exogenous noise identification.
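The abstract's central point — that a counterfactual query may require abducting only a subset of the exogenous noise variables — can be illustrated on a toy SCM. The model and variable names below are hypothetical and chosen for illustration only; they are not the paper's model. In this two-variable SCM, a hard intervention on X severs its dependence on U_x, so only U_y needs to be abducted:

```python
# Toy linear SCM (hypothetical, for illustration -- not the paper's model):
#   X := U_x
#   Y := 2*X + U_y
# Query: "What would Y have been, had X been x_cf?"
# The action do(X := x_cf) disconnects X from U_x, so abducting U_x is
# unnecessary: only U_y enters the counterfactual computation of Y.

def abduct_uy(x_obs: float, y_obs: float) -> float:
    """Invert the structural equation Y = 2*X + U_y at the observed values."""
    return y_obs - 2.0 * x_obs

def counterfactual_y(x_obs: float, y_obs: float, x_cf: float) -> float:
    """Abduction -> action -> prediction, abducting only U_y (not U_x)."""
    u_y = abduct_uy(x_obs, y_obs)   # abduction (partial: U_x is skipped)
    return 2.0 * x_cf + u_y         # action do(X := x_cf), then prediction

# Observed (x, y) = (1, 3) gives U_y = 1; under x_cf = 4, Y would be 9.
print(counterfactual_y(1.0, 3.0, 4.0))  # 9.0
```

In a deep-learning setting where each noise variable is a latent variable to be inferred, skipping the abduction of U_x is exactly the computational and statistical saving the abstract refers to.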
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Length: Regular submission (no more than 12 pages of main content)
Previous TMLR Submission Url: https://openreview.net/forum?id=3xtnz5jyrs
Changes Since Last Submission: Camera-ready revision: 1) We have revised Definition 2 per the AE's suggestion. 2) The submission is de-anonymized, and post-review changes are no longer marked in red. 3) Citations, references, and links are in blue.
Code: https://github.com/Saptarshi-Saha-1996/Noise-Abduction-for-Counterfactuals
Assigned Action Editor: ~bo_han2
Submission Number: 331