Jump-teaching: Ultra Robust and Efficient Learning with Noisy Labels

27 Sept 2024 (modified: 05 Feb 2025) · Submitted to ICLR 2025 · CC BY 4.0
Keywords: learning with noisy labels, machine learning, classification
TL;DR: A novel framework for combating noisy labels.
Abstract: Sample selection is the most straightforward technique for combating noisy labels, aiming to prevent mislabeled samples from degrading the robustness of neural networks. However, compounding selection bias and redundant selection operations remain persistent obstacles to both robustness and efficiency. To mitigate selection bias, existing methods rely on disagreement between partner networks or on additional forward passes through a single network; for the selection operation itself, they resort to dataset-wise modeling or batch-wise ranking. Each of these approaches yields sub-optimal performance. In this work, we propose $\textit{Jump-teaching}$, a novel framework that optimizes the typical sample-selection workflow. Firstly, Jump-teaching is the $\textit{first}$ work to discover significant disagreement within a single network across different training iterations. Based on this discovery, we propose a jump-manner model-updating strategy to bridge these disagreements, and we further illustrate its effectiveness from the perspective of error flow. Secondly, Jump-teaching introduces a lightweight plugin that simplifies the selection operation: it builds a detailed yet simple loss distribution over an auxiliary encoding space, which helps select clean samples more effectively. In our experiments, Jump-teaching not only outperforms state-of-the-art methods in robustness, but also reduces peak memory usage by $0.46\times$ and boosts training speed by up to $2.53\times$. Notably, existing methods can also benefit from integration with our framework.
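To make the sample-selection workflow described in the abstract concrete, below is a minimal PyTorch sketch, not taken from the submission: it combines the standard small-loss selection criterion with a delayed snapshot of the same network acting as the selector, in the spirit of bridging disagreement across training iterations within a single network. All names (`jump_style_selection_step`, `keep_ratio`, `sync_every`) and the specific synchronization rule are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def jump_style_selection_step(model, selector, batch, optimizer,
                              keep_ratio=0.7, sync_every=5, step=0):
    """One hypothetical training step of small-loss sample selection where the
    selector is a delayed ("jumped") snapshot of the same network rather than a
    partner network. keep_ratio and sync_every are illustrative hyperparameters."""
    x, y = batch

    # Rank samples by the per-sample loss of the delayed snapshot.
    with torch.no_grad():
        sel_loss = F.cross_entropy(selector(x), y, reduction="none")
    num_keep = max(1, int(keep_ratio * x.size(0)))
    clean_idx = torch.argsort(sel_loss)[:num_keep]  # presumed-clean (small-loss) samples

    # Update the live model only on the selected subset.
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x[clean_idx]), y[clean_idx])
    loss.backward()
    optimizer.step()

    # Periodically "jump" the selector forward to the current weights, so that
    # selection never relies on the same iteration's state as the update.
    if (step + 1) % sync_every == 0:
        selector.load_state_dict(model.state_dict())
    return loss.item()
```

The key design choice this sketch illustrates is that selection and parameter updates are decoupled in time within one network, instead of relying on a second partner network or extra forward passes in the same iteration.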
Supplementary Material: pdf
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 10552