Selective Mixup Helps with Distribution Shifts, But Not (Only) because of Mixup

25 Apr 2023 (modified: 12 Dec 2023) · Submitted to NeurIPS 2023
Keywords: mixup, distribution shifts, OOD generalization, weighted training
TL;DR: Selective mixup (a family of methods very successful at improving out-of-distribution generalization) is sometimes equivalent to weighted sampling, a classical baseline for handling covariate and label shift.
Abstract:

Mixup is a highly successful technique for improving the generalization of neural networks by augmenting the training data with combinations of random pairs. Selective mixup is a family of methods that apply mixup only to specific pairs, e.g. only combining examples across classes or domains. These methods have reported remarkable improvements on benchmarks with distribution shifts, but their mechanisms and limitations remain poorly understood.
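
For concreteness, here is a minimal sketch of selective mixup across classes, assuming a simple NumPy setup with a batch of inputs and integer class labels; the function name, the different-class pairing rule, and the Beta(alpha, alpha) mixing weight are illustrative choices, not the authors' implementation.

```python
# Illustrative sketch of selective mixup across classes (not the authors' code).
# Standard mixup picks a random partner for every example; selective mixup
# restricts the partner to a different class (or domain), then mixes inputs
# and one-hot labels with a Beta-distributed coefficient.
import numpy as np

def selective_mixup_batch(x, y, alpha=0.2, rng=None):
    """x: (n, d) float inputs; y: (n,) integer class labels."""
    rng = rng or np.random.default_rng()
    n = len(y)
    # Selection criterion: the partner must come from a *different* class.
    partners = np.array([rng.choice(np.flatnonzero(y != y[i])) for i in range(n)])
    lam = rng.beta(alpha, alpha, size=(n, 1))                   # mixing coefficients
    y_onehot = np.eye(y.max() + 1)[y]
    x_mix = lam * x + (1.0 - lam) * x[partners]                 # mixed inputs
    y_mix = lam * y_onehot + (1.0 - lam) * y_onehot[partners]   # mixed soft labels
    return x_mix, y_mix
```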

We examine an overlooked aspect of selective mixup that explains its success in a completely new light. We find that the non-random selection of pairs affects the training distribution and improves generalization by means completely unrelated to the mixing. For example, in binary classification, mixup across classes implicitly resamples the data toward a uniform class distribution, a classical solution to label shift. We show empirically that this implicit resampling explains much of the improvements reported in prior work. Theoretically, these results rely on a "regression toward the mean", an accidental property that we identify in several datasets.
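
To make the resampling equivalence concrete, the following sketch (under the same illustrative assumptions as the snippet above, with hypothetical data and variable names) compares the class marginal of an imbalanced binary dataset with the marginal of the soft labels produced by cross-class mixup:

```python
# Illustrative check of the implicit resampling effect in binary classification.
# With cross-class pairing, every mixed label combines one class-0 and one class-1
# example, so the average label mass per class comes out near 1/2 regardless of
# the original imbalance, i.e. the marginal a uniform resampling would produce.
import numpy as np

rng = np.random.default_rng(0)
y = np.array([0] * 900 + [1] * 100)        # 90% / 10% label imbalance (hypothetical data)
y_onehot = np.eye(2)[y]

partners = np.array([rng.choice(np.flatnonzero(y != yi)) for yi in y])
lam = rng.beta(0.2, 0.2, size=(len(y), 1))
y_mix = lam * y_onehot + (1.0 - lam) * y_onehot[partners]

print(y_onehot.mean(axis=0))   # original class marginal: ~[0.9, 0.1]
print(y_mix.mean(axis=0))      # after cross-class mixup: ~[0.5, 0.5]
```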

Takeaways: We have found a new equivalence between two successful methods: selective mixup and resampling. We identify limits of the former, confirm the effectiveness of the latter, and find better combinations of their respective benefits.

Supplementary Material: pdf
Submission Number: 717