A Simple yet Effective Baseline for Robust Deep Learning with Noisy Labels

25 Mar 2019 (modified: 22 Oct 2023) · Submitted to LLD 2019
Keywords: Learning with noisy labels, generalization of deep neural networks, robust deep learning
TL;DR: The paper proposes a simple yet effective baseline for learning with noisy labels.
Abstract: Deep neural networks have recently been shown to memorize training data, even when labels are noisy, which hurts generalization performance. To mitigate this issue, we propose a simple but effective method that is robust to noisy labels, even under severe noise. Our objective involves a variance regularization term that implicitly penalizes the Jacobian norm of the neural network on the whole training set (including the noisy-labeled data), which encourages generalization and prevents overfitting to the corrupted labels. Experiments on noisy benchmarks demonstrate that our approach achieves state-of-the-art performance with a high tolerance to severe noise.
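
The sketch below illustrates one common way such a variance regularization term can be implemented: penalizing the disagreement between two stochastic forward passes (e.g., under different dropout masks or data augmentations), which implicitly penalizes the network's Jacobian norm around the training points. This is a minimal illustrative example, not the authors' implementation; the function name `noisy_label_loss`, the weighting parameter `lam`, and the choice of two passes are assumptions for exposition.

```python
# Minimal sketch (not the authors' code): cross-entropy on possibly noisy
# labels plus a variance regularizer computed from two stochastic forward
# passes of the same inputs.
import torch
import torch.nn.functional as F


def noisy_label_loss(model, x, y, lam=1.0):
    """Cross-entropy + variance regularization (illustrative only).

    Assumes `model` is stochastic in training mode (e.g., uses dropout),
    so the two passes below produce different predictions. `lam` weights
    the variance term.
    """
    logits1 = model(x)  # first stochastic forward pass
    logits2 = model(x)  # second stochastic forward pass

    # Standard cross-entropy on the (possibly corrupted) labels.
    ce = F.cross_entropy(logits1, y)

    p1 = F.softmax(logits1, dim=1)
    p2 = F.softmax(logits2, dim=1)
    # Squared prediction disagreement acts as a variance penalty over all
    # training examples, including the noisy-labeled ones, discouraging
    # the network from overfitting to corrupted labels.
    var_reg = ((p1 - p2) ** 2).sum(dim=1).mean()

    return ce + lam * var_reg
```

In practice, the regularizer is applied to every training example regardless of its label, so its effect does not depend on knowing which labels are corrupted.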
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/arxiv:1909.09338/code)