Label Noise-Robust Learning using a Confidence-Based Sieving Strategy
Abstract: In learning tasks with label noise, improving model robustness against overfitting is a pivotal challenge because the model eventually memorizes labels, including the noisy ones. Identifying the samples with noisy labels and preventing the model from learning them is a promising approach to address this challenge. When training with noisy labels, the per-class confidence scores of the model, represented by the class probabilities, can serve as reliable criteria for assessing whether the input label is the true label or a corrupted one. In this work, we exploit this observation and propose a novel discriminator metric called confidence error and a sieving strategy called CONFES to effectively differentiate between clean and noisy samples. We provide theoretical guarantees on the probability of error for our proposed metric. We then experimentally illustrate the superior performance of our approach compared to recent studies in various settings, such as synthetic and real-world label noise. Moreover, we show that CONFES can be combined with other state-of-the-art approaches, such as Co-teaching and DivideMix, to further improve model performance.
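As a rough illustration of the idea in the abstract, the sketch below assumes the confidence error of a sample is the gap between the model's confidence in its own predicted class and its confidence in the annotated label, with samples whose gap is small treated as clean; the helper names `confidence_error` and `sieve` and the zero threshold are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def confidence_error(probs, given_labels):
    # Assumed definition: confidence in the model's predicted class
    # minus confidence in the annotated label. A clean label tends to
    # coincide with the predicted class, giving a gap near zero.
    pred_conf = probs.max(axis=1)
    label_conf = probs[np.arange(len(probs)), given_labels]
    return pred_conf - label_conf

def sieve(probs, given_labels, threshold=0.0):
    # Flag a sample as clean when its confidence error is at most
    # `threshold` (illustrative choice of threshold).
    return confidence_error(probs, given_labels) <= threshold

# Toy example: 3 samples, 2 classes; the last two annotated labels
# disagree with the model's most confident class.
probs = np.array([[0.9, 0.1],
                  [0.2, 0.8],
                  [0.6, 0.4]])
labels = np.array([0, 0, 1])
print(sieve(probs, labels))  # → [ True False False]
```

Samples flagged as noisy by such a sieve would then be excluded from (or down-weighted in) the training loss, which is what prevents the model from memorizing corrupted labels.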
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Length: Long submission (more than 12 pages of main content)
Previous TMLR Submission Url: https://openreview.net/forum?id=QptGQOnRwl&referrer=%5BTMLR%5D(%2Fgroup%3Fid%3DTMLR)
Changes Since Last Submission: The main modifications made to the manuscript compared to the previous submission (#679), according to comments from the reviewers and the action editor, are:
- Providing a theoretical analysis of the probability of error for the proposed confidence error metric (subsection 3.3, pages 4-6)
- Including additional empirical evaluations to support the theoretical analysis and to compare the proposed algorithm with the closest related work ("CONFES vs. LRT" paragraph on page 13, and Figure 9 on page 18)
- Refining the writing to ensure our claims are consistent with both the theoretical analysis and the experimental evaluations (e.g., claims regarding the small-loss trick)
Assigned Action Editor: ~Matthew_Blaschko1
Submission Number: 1227