Combating noisy labels with stochastic noise-tolerated supervised contrastive learning

22 Sept 2022 (modified: 13 Feb 2023) · ICLR 2023 Conference · Withdrawn Submission · Readers: Everyone
Abstract: Learning with noisy labels (LNL) aims to achieve good generalization performance given a label-corrupted training set. In this work, we consider a more challenging setting: LNL on \emph{fine-grained} datasets (LNL-FG). Due to the large inter-class ambiguity among fine-grained classes, deep models are more prone to overfitting noisy labels, leading to poor generalization performance. To handle this problem, we propose a novel framework called stochastic noise-tolerated supervised contrastive learning (SNSCL) that enhances the discriminability of deep models. Specifically, SNSCL contains a noise-tolerated contrastive loss and a stochastic module. To counteract the fitting of noisy labels, we design a noise-tolerated supervised contrastive learning loss that incorporates a weight-aware mechanism for noisy-label correction and selectively updates the momentum queue. Through this mechanism, SNSCL mitigates the effect of noisy anchors and avoids inserting noisy labels into the momentum-updated queue. Moreover, to avoid manually defined augmentation strategies in supervised contrastive learning, we propose an efficient stochastic module that samples feature embeddings from a generated distribution, which also enhances the representation ability of contrastive learning. SNSCL is general and compatible with prevailing robust LNL strategies, improving their performance on LNL-FG. Extensive experiments on four noisy benchmarks and an open-world dataset with varying noise ratios demonstrate that our framework significantly improves the performance of current LNL methods on LNL-FG.
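The abstract builds on the standard supervised contrastive (SupCon) objective, where each anchor is pulled toward same-label samples and pushed from the rest. The sketch below is a minimal NumPy illustration of that base loss with a hypothetical per-anchor weight vector standing in for the "weight-aware mechanism" the abstract mentions; the paper's actual formulation, queue update rule, and weighting scheme are not given in this text, so everything beyond the vanilla SupCon loss is an assumption for illustration only.

```python
import numpy as np

def supcon_loss(features, labels, anchor_weights=None, tau=0.1):
    """Weighted supervised contrastive loss (illustrative sketch).

    features: (N, D) array of embeddings (L2-normalized inside).
    labels:   (N,) integer class labels (possibly noisy).
    anchor_weights: optional (N,) weights in [0, 1]; a stand-in for the
        paper's weight-aware mechanism that down-weights likely-noisy
        anchors (hypothetical -- not specified in the abstract).
    """
    z = features / np.linalg.norm(features, axis=1, keepdims=True)
    n = z.shape[0]
    sim = z @ z.T / tau                              # pairwise similarities
    # Mask out self-comparisons in the denominator.
    logits = sim - 1e9 * np.eye(n)
    log_denom = np.log(np.exp(logits).sum(axis=1))   # log sum over a != i
    # Positive mask: same label, excluding self.
    pos = (labels[:, None] == labels[None, :]) & ~np.eye(n, dtype=bool)
    # Mean log-probability over each anchor's positives.
    log_prob = sim - log_denom[:, None]
    per_anchor = -(log_prob * pos).sum(axis=1) / np.maximum(pos.sum(axis=1), 1)
    if anchor_weights is None:
        anchor_weights = np.ones(n)
    return float((anchor_weights * per_anchor).sum() / anchor_weights.sum())
```

As a sanity check, two well-separated classes yield a lower loss than a fully collapsed embedding, and zeroing an anchor's weight removes its contribution, which is the intent behind suppressing noisy anchors.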
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Deep Learning and representational learning