Abstract: Machine learning methods can be unreliable when deployed in domains that differ from those on which they were trained. To address this, we may wish to learn representations of data that are domain-invariant in the sense that they preserve the data structure that is stable across domains while discarding the spuriously varying parts. There are many representation-learning approaches of this type, including methods based on data augmentation, distributional invariances, and risk invariance. Unfortunately, when faced with any particular real-world domain shift, it is unclear which, if any, of these methods can be expected to work. The purpose of this paper is to show how the different methods relate to each other and to clarify the real-world circumstances under which each can be expected to succeed. The key tool is a new notion of domain shift that relies on the idea that causal relationships are invariant across domains, while non-causal relationships (e.g., those due to confounding) may vary. Under this type of domain shift, a natural goal is to learn representations that are "counterfactually invariant". We find that each of the popular domain-invariant representation learning methods enforces an invariance that corresponds to counterfactual invariance under a particular type of causal structure. Therefore, the method should be chosen to match the causal structure of the underlying problem.
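For concreteness, here is a minimal sketch (not part of the abstract itself; the precise conditions in the paper may differ) of the kinds of invariance these method families are commonly taken to enforce, written for a representation Z = φ(X), label Y, and domain index E:

```latex
% Illustrative formalizations only; notation: representation Z = \varphi(X),
% label Y, domain/environment index E, and counterfactual input X(e) under domain e.

% Marginal distributional invariance (e.g., domain-adversarial training):
Z \perp\!\!\!\perp E

% Conditional distributional invariance:
Z \perp\!\!\!\perp E \mid Y

% Risk invariance: the optimal predictor on top of Z is the same in every domain,
\mathbb{E}[Y \mid Z, E = e] = \mathbb{E}[Y \mid Z] \quad \text{for all domains } e.

% Counterfactual invariance: the representation is unchanged under
% counterfactual changes of the domain,
\varphi(X(e)) = \varphi(X(e')) \quad \text{for all domains } e, e'.
```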