Invariant and Transportable Representations for Anti-Causal Domain Shifts

Published: 21 Jul 2022, Last Modified: 25 Nov 2024
SCIS 2022 Poster
Keywords: causality, spurious correlation, invariant prediction, domain shift, image classification, domain adaptation
TL;DR: Formalize anti-causal domain shifts and leverage causal assumptions to learn invariant and transportable representations.
Abstract: Real-world classification problems must contend with domain shift, the (potential) mismatch between the domain where a model is deployed and the domain(s) where the training data were gathered. Methods to handle such problems must specify what structure is common between the domains and what varies. A natural assumption is that causal (structural) relationships are invariant in all domains. Then, it is tempting to learn a predictor for label $Y$ that depends only on its causal parents. However, many real-world problems are ``anti-causal'' in the sense that $Y$ is a cause of the covariates $X$---in this case, $Y$ has no causal parents and the naive causal invariance is useless. In this paper, we study representation learning under a particular notion of domain shift that both respects causal invariance and naturally handles the ``anti-causal'' structure. We show how to leverage the shared causal structure of the domains to learn a representation that both admits an invariant predictor and allows fast adaptation in new domains. The key is to translate causal assumptions into learning principles that disentangle ``invariant'' and ``non-stable'' features. Experiments on both synthetic and real-world data demonstrate the effectiveness of the proposed learning algorithm. The full paper is available at \url{https://arxiv.org/abs/2207.01603}.
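
To make the anti-causal setup concrete, here is a minimal synthetic sketch, not taken from the paper: the function name, coefficients, and NumPy setup are all illustrative assumptions. The label $Y$ causes two covariates, an ``invariant'' feature whose mechanism $P(X_{\mathrm{inv}} \mid Y)$ is shared across domains and a ``non-stable'' feature whose correlation with $Y$ varies by domain. A predictor built on the invariant feature transfers to a shifted deployment domain, while one that leans on the non-stable feature collapses when the correlation flips.

```python
import numpy as np

def sample_anticausal_domain(n, spurious_corr, rng):
    """Sample one domain from an anti-causal model: Y causes X.

    Y -> x_inv : stable mechanism, identical in every domain.
    Y -> x_spu : domain-specific mechanism; its correlation with Y
                 (spurious_corr) varies across domains.
    (Names and coefficients are illustrative, not from the paper.)
    """
    y = rng.integers(0, 2, size=n)  # binary label
    # Invariant feature: same conditional P(x_inv | Y) in all domains.
    x_inv = (2 * y - 1) + 0.5 * rng.standard_normal(n)
    # Non-stable feature: its dependence on Y changes with the domain.
    x_spu = spurious_corr * (2 * y - 1) + 0.5 * rng.standard_normal(n)
    return np.stack([x_inv, x_spu], axis=1), y

rng = np.random.default_rng(0)
# Training domains: the spurious feature is highly predictive ...
Xa, ya = sample_anticausal_domain(5000, spurious_corr=1.5, rng=rng)
Xb, yb = sample_anticausal_domain(5000, spurious_corr=1.0, rng=rng)
# ... but at deployment its correlation with Y flips sign.
Xt, yt = sample_anticausal_domain(5000, spurious_corr=-1.5, rng=rng)

# A sign classifier on the invariant feature transfers;
# the same rule on the non-stable feature degrades badly.
acc_inv = np.mean((Xt[:, 0] > 0) == yt)
acc_spu = np.mean((Xt[:, 1] > 0) == yt)
print(f"invariant-feature accuracy: {acc_inv:.2f}")  # stays high
print(f"non-stable-feature accuracy: {acc_spu:.2f}")  # collapses
```

This only illustrates why disentangling the two kinds of features matters; the paper's actual algorithm for learning such a representation and adapting the non-stable part in new domains is described in the linked full paper.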
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/invariant-and-transportable-representations/code)