Confidence Adaptive Regularization for Deep Learning with Noisy Labels

29 Sept 2021 (modified: 13 Feb 2023) · ICLR 2022 Conference Withdrawn Submission · Readers: Everyone
Keywords: noisy labels, regularization, label correction
Abstract: Recent studies on the memorization effects of deep neural networks trained with noisy labels show that the networks first fit the correctly labeled training samples before memorizing the mislabeled samples. Motivated by this early-learning phenomenon, we propose a novel method to prevent memorization of the mislabeled samples. Unlike existing approaches, which use confidence (captured by the winning score of the model prediction) to identify or ignore mislabeled samples, we introduce an indicator branch to the original model that enables it to produce a new confidence value (i.e., indicating whether a sample is clean or mislabeled) for each sample. These confidence values are incorporated into the proposed loss function, which is trained to assign large values to correctly labeled samples and small values to mislabeled ones. We also discuss the limitations of our approach and propose an auxiliary regularization term to enhance the robustness of the model in challenging cases. Our empirical analysis shows that the model predicts correctly for both clean and mislabeled samples in the early learning phase. Based on the predictions in each iteration, we correct the noisy labels to steer the model towards the corrected targets. Further, we provide a theoretical analysis and conduct extensive experiments on synthetic and real-world datasets, demonstrating that our approach achieves results comparable to, and sometimes better than, state-of-the-art methods.
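The abstract describes a confidence-weighted loss: each sample's loss term is scaled by a per-sample confidence produced by an indicator branch, with a regularizer that prevents the trivial solution of driving all confidences to zero. The paper's exact formulation is not given here, so the following is only a minimal illustrative sketch of that general idea; the function names, the `lam` hyperparameter, and the specific `-log(c)` penalty are assumptions, not the authors' method.

```python
import numpy as np

def confidence_weighted_loss(logits, targets, confidence, lam=0.1):
    """Illustrative confidence-weighted cross-entropy (NOT the paper's exact loss).

    Each sample's cross-entropy is scaled by its predicted confidence c_i,
    so suspected-mislabeled samples (low c_i) contribute less. The
    -lam * log(c_i) penalty discourages the degenerate solution c_i = 0
    for every sample: lowering c_i only pays off when the sample's
    cross-entropy is large, i.e. when it looks mislabeled.
    """
    # numerically stable softmax over the class dimension
    z = logits - logits.max(axis=1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    # per-sample cross-entropy against the (possibly noisy) hard labels
    ce = -np.log(probs[np.arange(len(targets)), targets] + 1e-12)
    # confidence-weighted loss plus anti-collapse penalty
    return float(np.mean(confidence * ce - lam * np.log(confidence + 1e-12)))
```

Under this sketch, a batch of mislabeled samples incurs a much smaller weighted loss when their confidences are low, which is the behavior the abstract attributes to the learned confidence values.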
One-sentence Summary: An approach to mitigate the negative influence of mislabeled samples when training deep neural networks with noisy labels
Supplementary Material: zip
