TransformMix: Learning Transformation and Mixing Strategies for Sample-mixing Data Augmentation

22 Sept 2022 (modified: 13 Feb 2023) · ICLR 2023 Conference Withdrawn Submission · Readers: Everyone
Keywords: Data Augmentation, Automated Data Augmentation, Sample-mixing, Computer Vision
TL;DR: We propose an automated approach, TransformMix, to learn better transformation and mixing augmentation strategies from data
Abstract: Data augmentation improves the generalization power of deep learning models by synthesizing more training samples. Sample-mixing is a popular data augmentation approach that creates additional training samples by combining existing images. Recent sample-mixing methods, like Mixup and CutMix, adopt simple mixing operations to blend multiple input images. Although such a heuristic approach shows certain performance gains in some computer vision tasks, it mixes the images blindly and does not adapt to different datasets automatically. A mixing strategy that is effective for a particular dataset often does not generalize well to other datasets. If not properly configured, these methods may create misleading mixed images, which jeopardize the effectiveness of sample-mixing augmentation. In this work, we propose an automated approach, TransformMix, to learn better transformation and mixing augmentation strategies from data. In particular, TransformMix applies learned transformations and mixing masks to create compelling mixed images that contain correct and important information for the target tasks. We demonstrate the effectiveness of TransformMix on multiple datasets under the direct and transfer settings. Experimental results show that our method achieves better top-1 and top-5 accuracy as well as efficiency when compared with strong sample-mixing baselines.
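For readers unfamiliar with the sample-mixing operations the abstract refers to, the sketch below illustrates the general idea only: Mixup's global convex blending of two images and a spatial-mask mix in the spirit of CutMix. It is not the authors' TransformMix implementation (which learns the transformations and masks); the function names, label rule, and NumPy setup here are illustrative assumptions.

```python
# Minimal sketch of two generic sample-mixing operations (not TransformMix itself).
import numpy as np

def mixup(x1, x2, y1, y2, alpha=1.0, rng=None):
    """Mixup-style blend of two images (H, W, C) and their one-hot labels
    using a weight sampled from a Beta(alpha, alpha) distribution."""
    if rng is None:
        rng = np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    x = lam * x1 + (1.0 - lam) * x2
    y = lam * y1 + (1.0 - lam) * y2
    return x, y

def mask_mix(x1, x2, y1, y2, mask):
    """Combine two images with a spatial mask in [0, 1]; labels are weighted
    by the mask's mean coverage (a CutMix-style label rule)."""
    mask3 = mask[..., None]                    # broadcast mask over channels
    x = mask3 * x1 + (1.0 - mask3) * x2
    lam = float(mask.mean())
    y = lam * y1 + (1.0 - lam) * y2
    return x, y

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a, b = rng.random((32, 32, 3)), rng.random((32, 32, 3))
    ya, yb = np.eye(10)[3], np.eye(10)[7]
    xm, ym = mixup(a, b, ya, yb, rng=rng)
    m = np.zeros((32, 32)); m[8:24, 8:24] = 1.0  # a hard rectangular mask
    xc, yc = mask_mix(a, b, ya, yb, m)
    print(xm.shape, ym.sum(), xc.shape, yc.sum())
```

TransformMix, as described in the abstract, would replace the hand-chosen Beta weight and the fixed rectangular mask above with transformations and mixing masks learned from data.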
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Deep Learning and representational learning
Supplementary Material: zip