Variation-Bounded Losses for Learning with Noisy Labels

ICLR 2025 Conference Submission835 Authors

15 Sept 2024 (modified: 27 Nov 2024) · ICLR 2025 Conference Submission · CC BY 4.0
Keywords: Learning with Noisy Labels; Robust Loss Functions; Multi-Class Classification
TL;DR: We introduce a novel metric to measure the robustness of loss functions, and propose a new family of robust loss functions.
Abstract: The presence of noisy labels poses a significant challenge for training accurate deep neural networks. Previous works have proposed various robust loss functions to address this issue; however, these often suffer from drawbacks such as underfitting or insufficient noise tolerance. Furthermore, there is currently no reliable metric to guide the design of more effective robust loss functions. In this paper, we introduce the *Variation Ratio* as a novel metric for measuring the robustness of loss functions. Leveraging this metric, we propose a new family of robust loss functions, termed *Variation-Bounded Losses* (VBL), characterized by a bounded variation ratio. We investigate the theoretical properties of variation-bounded losses and prove that a smaller variation ratio leads to better robustness. Additionally, we show that the variation ratio provides a more relaxed condition than the commonly used symmetric condition for achieving noise-tolerant learning, making it a valuable tool for designing effective robust loss functions. We modify several commonly used loss functions into the variation-bounded form. These variation-bounded losses are simple, effective, and theoretically grounded. Extensive experiments demonstrate the superiority of our method in mitigating various types of label noise.
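
The abstract invokes the symmetric condition for noise-tolerant learning, under which the loss summed over all labels is constant for every prediction (Ghosh et al., 2017). As a point of reference only (this is not the paper's code, and the variation ratio itself is not defined in the abstract, so it is not reproduced here), the following minimal NumPy sketch checks that condition numerically: MAE satisfies it, while cross-entropy does not.

```python
# Minimal sketch (illustration only, not the paper's method): the symmetric
# condition requires sum_k L(f(x), k) to be constant for every prediction f(x).
import numpy as np

def mae_loss(p, y):
    # MAE between one-hot label e_y and softmax output p: ||e_y - p||_1 = 2(1 - p_y)
    return 2.0 * (1.0 - p[y])

def ce_loss(p, y):
    # Standard cross-entropy: -log p_y
    return -np.log(p[y])

rng = np.random.default_rng(0)
K = 10  # number of classes
for _ in range(3):
    logits = rng.normal(size=K)
    p = np.exp(logits) / np.exp(logits).sum()  # softmax prediction
    mae_sum = sum(mae_loss(p, k) for k in range(K))  # always 2(K-1) = 18 -> symmetric
    ce_sum = sum(ce_loss(p, k) for k in range(K))    # varies with p -> not symmetric
    print(f"sum_k MAE = {mae_sum:.4f}   sum_k CE = {ce_sum:.4f}")
```

Because the symmetric condition pins the loss sum to a constant, symmetric losses such as MAE are provably tolerant to symmetric label noise yet are known to underfit; the abstract's claim is that a bounded variation ratio relaxes this constraint while retaining robustness guarantees.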
Supplementary Material: zip
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 835