Weighted Risk Invariance: Domain Generalization under Invariant Feature Shift

Published: 27 Jul 2024 · Last Modified: 27 Jul 2024 · Accepted by TMLR · License: CC BY-SA 4.0
Abstract: Learning models whose predictions are invariant across multiple environments is a promising approach to out-of-distribution generalization. Such models are trained to extract features $X_{\text{inv}}$ for which the conditional distribution $Y \mid X_{\text{inv}}$ of the label given the extracted features does not change across environments. Invariant models are also expected to generalize under shifts in the marginal distribution $p(X_{\text{inv}})$ of the extracted features, a type of shift we call an invariant covariate shift. However, we show that existing methods for learning invariant models underperform under invariant covariate shift: they either fail to learn invariant models (even for data generated from simple and well-studied linear-Gaussian models) or suffer from poor finite-sample performance. To alleviate these problems, we propose weighted risk invariance (WRI). Our framework imposes invariance of the risk across environments, subject to appropriate reweightings of the training examples. We show that WRI provably learns invariant models, i.e., discards spurious correlations, in linear-Gaussian settings. We propose a practical algorithm that implements WRI by learning the density $p(X_{\text{inv}})$ and the model parameters simultaneously, and we demonstrate empirically that WRI outperforms previous invariant learning methods under invariant covariate shift.
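To make the reweighting idea concrete, below is a minimal sketch of a weighted-risk-invariance-style training objective. It is an illustration under simplifying assumptions, not the authors' implementation (see the linked repository for that): the names `featurizer`, `classifier`, `density_model`, `wri_loss`, and `lam` are hypothetical, the per-example weights are taken from a separately trained density model over the learned features, and invariance of the reweighted risks across environments is encouraged here with a simple variance penalty as a stand-in for the paper's actual penalty.

```python
# Hypothetical sketch of a weighted-risk-invariance penalty. Assumes a
# featurizer/classifier pair and a density model over features, all torch
# modules supplied by the caller; none of these names come from the paper.
import torch
import torch.nn.functional as F

def wri_loss(featurizer, classifier, density_model, env_batches, lam=1.0):
    """env_batches: list of (x, y) tensors, one batch per training environment."""
    weighted_risks = []
    for x, y in env_batches:
        z = featurizer(x)                      # candidate invariant features
        logits = classifier(z)
        per_example = F.cross_entropy(logits, y, reduction="none")
        # Reweight each example by an estimated density of its features, so the
        # per-environment risks are computed under a comparable feature
        # distribution. density_model is assumed to return a 1-D tensor of
        # per-example weights; it is detached here because it would be trained
        # by its own objective (e.g., maximum likelihood), not by this loss.
        w = density_model(z).detach()
        weighted_risks.append((w * per_example).mean())
    risks = torch.stack(weighted_risks)
    # Standard ERM term plus a penalty on disagreement of the reweighted
    # risks across environments (variance used as a simple invariance proxy).
    return risks.mean() + lam * risks.var()
```

In this simplified form, `lam` trades off average performance against cross-environment agreement of the reweighted risks; the paper's actual algorithm additionally learns $p(X_{\text{inv}})$ jointly with the model parameters.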
Submission Length: Regular submission (no more than 12 pages of main content)
Code: https://github.com/ginawong/weighted_risk_invariance/
Assigned Action Editor: ~Yaoliang_Yu1
Submission Number: 2423