Variational Counterfactual Prediction under Runtime Domain Corruption

22 Sept 2022 (modified: 13 Feb 2023) · ICLR 2023 Conference Withdrawn Submission · Readers: Everyone
Keywords: Causal inference, treatment effect estimation, deep learning, variational inference, domain adaptation, domain shift, covariate shift, privacy concern.
TL;DR: We propose a generalization upper bound and an adversarially unified variational method for runtime causal inference.
Abstract: Causal inference is pivotal to decision-making, as it estimates the potential treatment effect before a decision is actually made on an individual or a population, with applications in high-stakes areas such as healthcare, education, and e-commerce. Various neural methods have been proposed for causal effect estimation from observational data, where counterfactual prediction commonly assumes that the same variables are available at both the training and inference (i.e., runtime) stages. In reality, the accessibility of variables is often impaired, mainly due to privacy and ethical concerns, as with medical records. We term this \textit{runtime domain corruption}, which seriously challenges the generalizability of the trained counterfactual predictor on top of the existing confoundedness and selection bias. To counteract runtime domain corruption, we subsume counterfactual prediction under the notion of domain adaptation. Specifically, we upper-bound the error w.r.t. the target domain (i.e., runtime covariates) by the sum of the source-domain error and an inter-domain distribution distance. In addition, we build an adversarially unified variational causal effect model, named VEGAN, with a novel two-stage adversarial domain adaptation scheme that implicitly reduces the distribution disparity between the treated and control groups first, and between the training and inference domains afterwards. With extensive experiments on causal inference benchmark datasets, we demonstrate that the proposed method outperforms state-of-the-art baselines on individual-level causal effect estimation in the presence of runtime domain corruption.
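For intuition, the bound alluded to in the abstract follows the standard shape of domain-adaptation generalization bounds (e.g., Ben-David et al.); the paper's exact statement may differ, so the display below is only an illustrative sketch. Here $\epsilon_S(h)$ and $\epsilon_T(h)$ denote the risks of a hypothesis $h$ on the source (training) and target (runtime) covariate distributions $\mathcal{D}_S$ and $\mathcal{D}_T$, $d(\cdot,\cdot)$ is an inter-domain distribution distance, and $\lambda$ is the combined error of the best joint hypothesis:

\epsilon_T(h) \;\le\; \epsilon_S(h) \;+\; d(\mathcal{D}_S, \mathcal{D}_T) \;+\; \lambda .

Read this way, the two-stage adversarial domain adaptation can be seen as shrinking the distance term twice: first between treated and control representations (to counter selection bias), and then between training and runtime representations (to counter domain corruption).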
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Probabilistic Methods (eg, variational inference, causal inference, Gaussian processes)