AMA: Asymptotic Midpoint Augmentation for Margin Balancing and Moderate Broadening

22 Sept 2022 (modified: 13 Feb 2023), ICLR 2023 Conference Withdrawn Submission
Abstract: Margin plays an important role in regularization, alongside alignment and uniformity, as shown in the contrastive learning literature. However, feature augmentation has rarely been analyzed within this framework, despite its effectiveness as a regularizer. In this paper, we focus on an analysis framework for feature augmentation and propose a novel method, called $\textit{asymptotic midpoint augmentation}$ (AMA), which gradually pushes the decision boundary toward the midpoint of related representations via their augmentation. The method induces two effects: 1) balancing the margin across all classes and 2) broadening the margin only moderately, until it reaches maximal confidence. These effects respectively address the low uniformity of feature augmentations and the representation collapse caused by excessively low alignment in contrastive learning. We empirically analyze these effects in a toy task for clear visualization and validate their impact on the original, long-tailed, and coarse-to-fine transfer tasks on CIFAR-10 and CIFAR-100.
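The abstract describes AMA only at a high level. Below is a minimal PyTorch sketch of one way such a midpoint augmentation could be realized, assuming features are interpolated toward a paired sample from the batch with a weight that approaches 1/2 over training; the function name, pairing strategy, and schedule are illustrative assumptions, not the paper's implementation.

    import torch

    def asymptotic_midpoint_augment(feats, labels, step, total_steps):
        """Hypothetical sketch of asymptotic midpoint augmentation (AMA).

        Interpolates each feature toward a feature from another sample in the
        batch, with an interpolation weight that grows toward 1/2 over training
        ("asymptotic"). The pairing strategy and schedule are assumptions; the
        paper's exact formulation may differ.
        """
        # Pair each sample with another sample via a random permutation of the batch.
        perm = torch.randperm(feats.size(0))
        other_feats = feats[perm]

        # Interpolation weight approaches 1/2 over training, so augmented
        # features move only gradually toward the pairwise midpoint.
        alpha = 0.5 * (step / total_steps)

        aug_feats = (1 - alpha) * feats + alpha * other_feats
        # Augmented features keep the original labels, so training on them
        # pushes the decision boundary toward the midpoint between the pair.
        return aug_feats, labels

    # Example usage with random features (batch of 8, 128-dim).
    feats = torch.randn(8, 128)
    labels = torch.randint(0, 10, (8,))
    aug_feats, aug_labels = asymptotic_midpoint_augment(feats, labels, step=500, total_steps=1000)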
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Deep Learning and representational learning
Supplementary Material: zip