Adversarial Data Augmentations for Out-of-Distribution Generalization

ICML 2023 Workshop SCIS Submission 40 Authors

Published: 20 Jun 2023, Last Modified: 28 Jul 2023, SCIS 2023 Poster
Keywords: out of distribution generalization, robustness, data augmentations, ERM collapse
TL;DR: A plug-in optimization layer for out-of-distribution generalization that learns a distribution of data augmentations designed to prevent collapse to the empirical risk minimization (ERM) solution and generate more diverse environments.
Abstract: Out-of-distribution (OoD) generalization is required when representation learning encounters a distribution shift, which arises frequently in practice when training and testing data come from different environments. Covariate shift is a type of distribution shift that occurs only in the input data, while the concept distribution stays invariant. We propose RIA - Regularization for Invariance with Adversarial training, a new method for OoD generalization under covariate shift that performs an adversarial search for training data environments. These new environments are induced by adversarial data augmentations that prevent a collapse to an in-distribution trained learner. RIA works with many existing OoD generalization methods for covariate shift that can be formulated as constrained optimization problems. We develop an alternating gradient descent-ascent algorithm to solve the problem, and perform extensive experiments on OoD graph classification for various kinds of synthetic and natural distribution shifts. We demonstrate that our method achieves high accuracy compared with OoD baselines.
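The abstract's alternating gradient descent-ascent idea can be illustrated on a toy problem. The sketch below is not the paper's algorithm (whose details are not given here): it solves a hypothetical one-point min-max objective in which a learner parameter `theta` descends a loss while an additive input "augmentation" `delta` ascends it, with an assumed quadratic penalty `lam` keeping the adversary bounded.

```python
# Illustrative min-max objective (assumed, not from the paper):
#   min_theta max_delta  L(theta, delta) = (theta*(x + delta) - y)**2 - lam*delta**2
# theta plays the learner; delta plays an adversarial augmentation of input x.

x, y = 1.0, 2.0      # one toy training point
lam = 10.0           # penalty keeping the augmentation bounded (assumed)
eta = 0.02           # shared step size (assumed)
theta, delta = 0.0, 0.0

for _ in range(5000):
    # ascent step: the augmentation climbs the loss
    r = theta * (x + delta) - y                    # residual
    g_delta = 2.0 * r * theta - 2.0 * lam * delta  # dL/d(delta)
    delta += eta * g_delta
    # descent step: the learner descends the perturbed loss
    r = theta * (x + delta) - y
    g_theta = 2.0 * r * (x + delta)                # dL/d(theta)
    theta -= eta * g_theta

print(theta, delta)  # approaches the saddle point theta = y/x = 2, delta = 0
```

Because the penalized objective is convex in `theta` and concave in `delta` near the saddle point, the alternating updates contract toward it; this is the same convex-concave structure that makes gradient descent-ascent a natural solver for adversarial-augmentation searches.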
Submission Number: 40