Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: data augmentation
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Abstract: Data augmentation is a dominant method for reducing model overfitting and improving generalization.
Most existing data augmentation methods tend to strike a compromise when augmenting the data, i.e., increasing the augmentation magnitude carefully to avoid degrading some samples too much and harming model performance.
We delve into the relationship between data augmentation and model performance, revealing that the performance drop with heavy augmentation comes from the presence of out-of-distribution (OOD) data.
Nonetheless, since the same data transformation affects different training samples differently, even under heavy augmentation a portion of the augmented data remains in-distribution and is beneficial to model training.
Based on this observation, we propose a novel data augmentation method, named **DualAug**, which keeps the augmented data in-distribution as much as possible at a reasonable time and computational cost.
We design a data mixing strategy to fuse augmented data from both the basic- and the heavy-augmentation branches.
Extensive experiments on supervised image classification benchmarks show that DualAug improves various automated data augmentation methods.
Moreover, experiments on semi-supervised learning and contrastive self-supervised learning demonstrate that DualAug can also improve related methods.
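The abstract describes mixing augmented data from a basic- and a heavy-augmentation branch while filtering out-of-distribution samples. The snippet below is a minimal, hypothetical sketch of that idea, not the paper's actual implementation: the names `dual_aug_batch`, `basic_aug`, `heavy_aug`, `ood_score`, and `threshold` are illustrative assumptions, and the paper's concrete mixing strategy and OOD criterion may differ.

```python
import torch

def dual_aug_batch(images, basic_aug, heavy_aug, ood_score, threshold):
    """Hypothetical sketch of a dual-branch augmentation mix.

    basic_aug / heavy_aug: callables mapping a batch of images to an
    augmented batch of the same shape; ood_score: callable returning a
    per-sample score (higher = more out-of-distribution). All names are
    illustrative, not the paper's actual API.
    """
    basic = basic_aug(images)   # basic-augmentation branch
    heavy = heavy_aug(images)   # heavy-augmentation branch

    # Keep a heavily augmented sample only if it still looks in-distribution;
    # otherwise fall back to its basic-augmented counterpart.
    scores = ood_score(heavy)                           # shape: (batch,)
    keep_heavy = (scores < threshold).view(-1, 1, 1, 1) # broadcast over CHW
    return torch.where(keep_heavy, heavy, basic)
```

Under this sketch, the per-sample selection is what keeps the effective augmentation distribution close to the training distribution while still exploiting heavy augmentation where it is harmless.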
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 3460