Repeated Environment Inference for Invariant Learning

Published: 21 Jul 2022 · Last Modified: 22 Oct 2023 · SCIS 2022 Poster
Keywords: Invariant Learning, Environment Inference
TL;DR: We repeat the Environment Inference process by training ERM models on majority environments to find better environment labels, facilitating better Invariant Learning when environment labels are unknown.
Abstract: We study the problem of invariant learning when environment labels are unknown. We focus on the notion of an invariant representation, under which the Bayes optimal conditional label distribution is the same across different environments. Previous work performs Environment Inference (EI) by maximizing the penalty term of the Invariant Risk Minimization (IRM) framework. The EI step uses a reference model that focuses on spurious correlations to efficiently reach a good environment partition. However, it is not clear how to find such a reference model. In this work, we propose to repeat the EI process and retrain an ERM model on the \textit{majority} environment inferred in the previous EI round. Under mild assumptions, we find that this iterative process helps learn a representation that captures the spurious correlation better than a single step does, which in turn yields better Environment Inference and better Invariant Learning. We show that this method outperforms baselines on both synthetic and real-world datasets.
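The iterative loop described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: for simplicity it replaces the IRM-penalty-maximizing EI step with a common proxy (examples the reference model classifies correctly form the "majority" environment), and all function names, the toy data, and the logistic-regression reference model are assumptions introduced here.

```python
import numpy as np

def train_erm(X, y, lr=0.1, steps=500):
    """Plain ERM reference model: logistic regression fit by gradient descent."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def infer_environments(w, X, y):
    """Simplified EI proxy (NOT the paper's IRM-penalty maximization):
    examples the reference model gets right form the 'majority'
    environment; the rest form the 'minority' environment."""
    pred = (X @ w > 0).astype(int)
    majority = pred == y
    return majority, ~majority

def repeated_ei(X, y, rounds=3):
    """Repeat EI: each round retrains ERM on the majority environment
    from the previous round, so the reference model leans more heavily
    on the spurious correlation."""
    majority = np.ones(len(y), dtype=bool)  # start from the full dataset
    for _ in range(rounds):
        w = train_erm(X[majority], y[majority])
        majority, minority = infer_environments(w, X, y)
    return majority, minority, w

# Toy data (hypothetical): a spurious feature aligned with the label 90%
# of the time and a core feature aligned 75% of the time.
rng = np.random.default_rng(0)
n = 1000
y = rng.integers(0, 2, n)
spur = np.where(rng.random(n) < 0.90, 2 * y - 1, 1 - 2 * y)
core = np.where(rng.random(n) < 0.75, 2 * y - 1, 1 - 2 * y)
X = np.stack([spur, core], axis=1).astype(float)

majority, minority, w = repeated_ei(X, y)
```

With this setup the final reference model puts more weight on the spurious feature than the core feature, so the inferred minority environment collects the examples where the spurious correlation fails, which is the partition a downstream invariant learner would then use.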
Confirmation: Yes
Community Implementations: [1 code implementation (CatalyzeX)](https://www.catalyzex.com/paper/arxiv:2207.12876/code)