Progressively Robust Loss for Deep Learning with Noisy Labels

Published: 2024, Last Modified: 11 Nov 2024. IJCNN 2024. License: CC BY-SA 4.0.
Abstract: Learning with noisy labels (LNL) plays a pivotal role in equipping deep neural networks (DNNs) to combat label noise. Early noise-robust loss functions tend to promote robustness against noisy labels at the cost of data-fitting ability. Recent robust loss methods typically try to balance noise robustness and learning capability; however, most of them reduce to only partially robust losses, which remain exposed to the risk of overfitting noisy labels. To this end, we propose a novel paradigm, the progressively robust loss framework, which dynamically guides existing noise-robust losses from fast convergence toward noise tolerance, in accord with deep models’ memorization effect. Furthermore, our theoretical analysis of the upper bounds on empirical risk errors illustrates the increasing noise robustness of our approach. Experimental results on two synthetic benchmarks (CIFAR-100N and CIFAR-80N) and two real-world noisy datasets (WebFG-496 and Webvision) demonstrate the superiority of our approach over state-of-the-art robust loss methods in dealing with noisy labels. The code is available at https://github.com/ptcepgce/ptcepgce.
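The abstract does not spell out the framework's exact schedule, but the core idea it describes, steering a loss from fast-fitting toward noise-tolerant over the course of training, can be illustrated with a minimal sketch. Below, a mixing weight `alpha` (a hypothetical linear schedule, not necessarily the paper's) anneals from pure cross-entropy (strong data fitting, sensitive to noise) to pure mean absolute error (bounded, hence noise-robust), mirroring the memorization effect: clean patterns are learned early, so robustness is increased later when noisy labels would otherwise be memorized.

```python
import numpy as np

def cross_entropy(probs, label):
    # Standard CE on predicted class probabilities: fits data fast,
    # but unbounded, so it can overfit noisy labels late in training.
    return -np.log(probs[label] + 1e-12)

def mean_absolute_error(probs, label):
    # MAE between the one-hot target and the probabilities.
    # Bounded in [0, 2], which is why it is noise-tolerant.
    one_hot = np.zeros_like(probs)
    one_hot[label] = 1.0
    return np.abs(one_hot - probs).sum()

def progressive_loss(probs, label, epoch, total_epochs):
    # Illustrative linear schedule (an assumption, not the paper's exact rule):
    # alpha moves from 0 (pure CE, fast convergence) to 1 (pure MAE, noise tolerance).
    alpha = min(epoch / total_epochs, 1.0)
    return (1.0 - alpha) * cross_entropy(probs, label) \
        + alpha * mean_absolute_error(probs, label)
```

For example, with `probs = [0.7, 0.2, 0.1]` and `label = 0`, the loss equals the CE value at epoch 0 and the MAE value (0.6) at the final epoch, interpolating smoothly in between.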