A Simple yet Effective Baseline for Robust Deep Learning with Noisy Labels

Mar 25, 2019 ICLR 2019 Workshop LLD Blind Submission
  • Keywords: Learning with noisy labels, generalization of deep neural networks, robust deep learning
  • TL;DR: This paper proposes a simple yet effective baseline for learning with noisy labels.
  • Abstract: Recently, deep neural networks have been shown to memorize training data, even when the labels are noisy, which hurts generalization performance. To mitigate this issue, we propose a simple but effective method that is robust to noisy labels, even under severe noise. Our objective includes a variance regularization term that implicitly penalizes the Jacobian norm of the neural network over the whole training set (including the noisy-labeled data), which encourages generalization and prevents overfitting to the corrupted labels. Experiments on noisy benchmarks demonstrate that our approach achieves state-of-the-art performance with a high tolerance to severe noise.
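One plausible reading of the variance-regularized objective is a standard classification loss plus a penalty on how much the network's predictions vary across stochastic forward passes (e.g. different augmentations or dropout masks); small prediction variance under input perturbation is closely tied to a small input Jacobian norm. The sketch below is an illustrative NumPy implementation under that assumption, not the paper's exact objective; the function and parameter names (`variance_regularized_loss`, `lam`) are hypothetical.

```python
import numpy as np

def variance_regularized_loss(logits_passes, labels, lam=1.0):
    """Hypothetical sketch: cross-entropy plus a prediction-variance penalty.

    logits_passes: float array of shape (T, N, C) -- T stochastic forward
        passes (e.g. different augmentations/dropout masks) over N examples.
    labels: int array of shape (N,) with (possibly noisy) class indices.
    lam: regularization weight (hypothetical name, not from the paper).
    """
    T, N, C = logits_passes.shape
    # numerically stable softmax over the class axis
    z = logits_passes - logits_passes.max(axis=-1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)   # (T, N, C)
    mean_probs = probs.mean(axis=0)                             # (N, C)
    # cross-entropy on the mean prediction (uses the noisy labels as-is)
    ce = -np.log(mean_probs[np.arange(N), labels] + 1e-12).mean()
    # variance of predictions across passes; driving this to zero
    # discourages sensitivity to input perturbations
    var = ((probs - mean_probs) ** 2).sum(axis=-1).mean()
    return ce + lam * var
```

When all passes agree exactly, the variance term vanishes and the loss reduces to plain cross-entropy; perturbed passes incur an extra penalty proportional to their disagreement.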