Keywords: invariant learning, out-of-distribution generalisation, OOD
TL;DR: We obtain invariant predictors by optimizing the features to elicit the same non-parametric predictor across different environments.
Abstract: Invariant learning methods aim to obtain robust features that can be used in the same way across multiple environments and generalize out-of-distribution. This paper introduces a novel method to achieve this, called Invariant KNN. We are guided by the idea that robust features should elicit an invariant non-parametric predictor across domains. To this end, we build a K-nearest-neighbors predictor from each training environment and constrain these predictors to agree. We show experimentally that this approach yields invariant predictors that learn to use the robust features in the data and generalize out-of-distribution. We evaluate our algorithm on a simple but popular benchmark and demonstrate that it is both competitive with other popular algorithms and less sensitive to hyperparameter selection.
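The abstract's core idea, building a KNN predictor per environment and penalizing their disagreement, can be illustrated with a minimal NumPy sketch. All names here (`knn_predict`, `invariance_penalty`, the toy data) are hypothetical illustrations of the general principle, not the paper's actual implementation: we compare the disagreement penalty when the feature map keeps only the robust feature versus when it passes the spurious feature through.

```python
import numpy as np

def knn_predict(train_x, train_y, query_x, k=3):
    """Non-parametric KNN regressor: average the labels of the k
    nearest training points (Euclidean distance in feature space)."""
    dists = np.linalg.norm(query_x[:, None, :] - train_x[None, :, :], axis=-1)
    idx = np.argsort(dists, axis=1)[:, :k]
    return train_y[idx].mean(axis=1)

def invariance_penalty(envs, query_x, k=3):
    """Disagreement between the per-environment KNN predictors on a
    shared query set: mean squared difference over environment pairs."""
    preds = [knn_predict(x, y, query_x, k) for x, y in envs]
    total, pairs = 0.0, 0
    for i in range(len(preds)):
        for j in range(i + 1, len(preds)):
            total += np.mean((preds[i] - preds[j]) ** 2)
            pairs += 1
    return total / pairs

# Toy data: y depends only on the first (robust) feature; the second
# feature's correlation with y flips sign across environments (spurious).
rng = np.random.default_rng(0)
def make_env(sign, n=200):
    robust = rng.normal(size=n)
    y = robust + 0.1 * rng.normal(size=n)
    spurious = sign * y + 0.1 * rng.normal(size=n)
    return np.stack([robust, spurious], axis=1), y

env_a, env_b = make_env(+1), make_env(-1)
query = rng.normal(size=(50, 2))

# Feature map keeping only the robust coordinate vs. the full input.
keep_robust = lambda x: x[:, :1]
penalty_robust = invariance_penalty(
    [(keep_robust(x), y) for x, y in (env_a, env_b)], keep_robust(query))
penalty_all = invariance_penalty([env_a, env_b], query)
print(penalty_robust < penalty_all)  # robust features give lower disagreement
```

In the full method the feature map would be learned, with the penalty driving it toward features on which all environments induce the same KNN predictor; this sketch only evaluates two fixed feature maps to show the penalty separates robust from spurious features.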
Submission Number: 74