MixupE: Understanding and Improving Mixup from Directional Derivative Perspective

Published: 08 May 2023, Last Modified: 03 Nov 2024 (UAI 2023)
Keywords: Mixup, generalization, data augmentation, regularization
TL;DR: We propose a theory-driven improvement of Mixup, which is theoretically and empirically validated to be effective.
Abstract: Mixup is a popular data augmentation technique for training deep neural networks, in which additional samples are generated by linearly interpolating pairs of inputs and their labels. This technique is known to improve generalization performance in many learning paradigms and applications. In this work, we first analyze Mixup and show that it implicitly regularizes infinitely many directional derivatives of all orders. Based on this new insight, we propose an improved version of Mixup, theoretically justified to deliver better generalization performance than vanilla Mixup. To demonstrate the effectiveness of the proposed method, we conduct experiments across various domains such as images, tabular data, speech, and graphs. Our results show that the proposed method improves Mixup across multiple datasets and architectures, for instance, improving ImageNet top-1 accuracy over Mixup by 0.8%.
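For context, the vanilla Mixup baseline that the abstract describes interpolates pairs of inputs and labels with a Beta-distributed weight. Below is a minimal NumPy sketch of that interpolation step; the function name `mixup_batch` and the choice `alpha=0.2` are illustrative assumptions rather than details from the paper, and MixupE's proposed modification is not shown here.

```python
import numpy as np

def mixup_batch(x, y, alpha=0.2, rng=None):
    """Vanilla Mixup (Zhang et al., 2018): blend a batch with a
    shuffled copy of itself using a single Beta-distributed weight.
    `y` should be one-hot so labels interpolate like inputs."""
    if rng is None:
        rng = np.random.default_rng()
    lam = rng.beta(alpha, alpha)       # mixing weight lambda ~ Beta(alpha, alpha)
    perm = rng.permutation(len(x))     # random pairing within the batch
    x_mix = lam * x + (1.0 - lam) * x[perm]
    y_mix = lam * y + (1.0 - lam) * y[perm]
    return x_mix, y_mix

# Example: mix a batch of 4 flattened inputs with 3-class one-hot labels.
x = np.random.rand(4, 8).astype(np.float32)
y = np.eye(3, dtype=np.float32)[[0, 1, 2, 0]]
x_mix, y_mix = mixup_batch(x, y)
```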
Supplementary Material: pdf
Other Supplementary Material: zip
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/mixupe-understanding-and-improving-mixup-from/code)