Mixup to the Random Extreme and Its Performances in Robust Image Classification

TMLR Paper994 Authors

24 Mar 2023 (modified: 17 Sept 2024) · Rejected by TMLR · CC BY 4.0
Abstract: We introduce RandomMix, an inexpensive yet effective data augmentation method that combines interpolation-based training with a negative-weight sampling scheme. Rather than training with a single unified mixup policy for all combinations of example pairs and their labels, we design a separate mixup rule for each pair of data points. This approach naturally combines the advantages of previous mixup methods, including Mixup (Zhang et al., 2017) and CutMix (Yun et al., 2019), while offering benefits of its own: fast training without introducing any extra components or parameters, and strong robustness to unseen data distributions. We provide empirical results to demonstrate the method. Experiments on a range of computer vision benchmark datasets show that RandomMix remains comparable to other popular mixup methods in accuracy, outperforms them in robustness, and shows advantages in efficiency.
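To make the interpolation-based training concrete, the following is a minimal sketch of a mixup-style batch transform. The standard Mixup rule draws a weight λ from a Beta distribution in [0, 1]; the abstract's "negative weights sampling scheme" is assumed here to mean sampling λ from a wider interval that extends below 0 and above 1. The function name `mixup_batch` and the range `lam_range=(-0.5, 1.5)` are illustrative assumptions, not the paper's actual rule.

```python
import numpy as np

def mixup_batch(x, y, lam_range=(-0.5, 1.5), rng=None):
    """Interpolation-based mixing of a batch (sketch).

    Assumption: the 'random extreme' weight lam is drawn uniformly
    from lam_range, which may include values outside [0, 1]; the
    exact sampling rule in the paper may differ.
    """
    rng = np.random.default_rng() if rng is None else rng
    lam = rng.uniform(*lam_range)        # assumed sampling rule
    perm = rng.permutation(len(x))       # random pairing of examples
    x_mix = lam * x + (1.0 - lam) * x[perm]
    y_mix = lam * y + (1.0 - lam) * y[perm]
    return x_mix, y_mix, lam

# usage on a toy batch: 4 images (3x8x8) with one-hot labels over 10 classes
x = np.random.rand(4, 3, 8, 8)
y = np.eye(10)[np.array([0, 1, 2, 3])]
x_mix, y_mix, lam = mixup_batch(x, y)
```

Because the two weights always sum to one (λ and 1 − λ), each mixed label row still sums to one even when λ is negative, so the mixed targets remain usable with a standard cross-entropy-style loss.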
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Simon_Kornblith1
Submission Number: 994