IEG: Robust neural net training with severe label noises

25 Sept 2019 (modified: 05 May 2023) | ICLR 2020 Conference Withdrawn Submission | Readers: Everyone
Abstract: Collecting large-scale data with clean labels for supervised training of neural networks is practically challenging. Although noisy labels are usually cheap to acquire, existing methods suffer severely on training datasets with high noise ratios, making high-cost human labeling a necessity. Here we present a method that trains neural networks in a way that is almost invulnerable to severe label noise by utilizing a tiny trusted set. Our method, named IEG, is based on three key factors: (i) Isolation of noisy labels, (ii) Escalation of useful supervision from mislabeled data, and (iii) Guidance from small trusted data. On CIFAR100 with a 40\% uniform noise ratio and only 10 trusted labels per class, our method achieves $80.2{\pm}0.3\%$ classification accuracy, only 1.4\% higher error than a neural network trained without label noise. Moreover, even when the noise ratio is increased to 80\%, our method still achieves a high accuracy of $75.5{\pm}0.2\%$, compared with the previous best of 47.7\%. Finally, our method sets a new state of the art across various challenging types and levels of label corruption, as well as on the large-scale WebVision benchmark.
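To make the "guidance from small trusted data" idea concrete, below is a minimal sketch (not the authors' released code) of one common way such guidance is operationalized: per-example weights on the noisy-batch losses are set by a one-step meta-gradient against a small trusted batch, in the spirit of learning-to-reweight approaches. All names (`meta_reweight_step`, `weighted_loss`, the linear model) are illustrative assumptions, not part of the paper.

```python
# Hedged sketch: meta-reweighting noisy examples with a trusted batch (JAX).
# Assumes a toy linear classifier; IEG's full method additionally re-estimates
# labels and uses unsupervised regularization, which this sketch omits.
import jax
import jax.numpy as jnp


def init_params(key, d_in, d_out):
    w_key, _ = jax.random.split(key)
    return {"w": 0.01 * jax.random.normal(w_key, (d_in, d_out)),
            "b": jnp.zeros(d_out)}


def logits(params, x):
    return x @ params["w"] + params["b"]


def ce(params, x, y):
    # Per-example cross-entropy; y holds integer class labels.
    logp = jax.nn.log_softmax(logits(params, x))
    return -jnp.take_along_axis(logp, y[:, None], axis=1)[:, 0]


def weighted_loss(params, weights, x, y):
    return jnp.sum(weights * ce(params, x, y))


def meta_reweight_step(params, noisy, trusted, lr=0.1):
    xn, yn = noisy
    xt, yt = trusted

    def trusted_loss_after_update(weights):
        # One virtual SGD step on the weighted noisy loss...
        grads = jax.grad(weighted_loss)(params, weights, xn, yn)
        updated = jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)
        # ...evaluated on the small trusted batch.
        return jnp.mean(ce(updated, xt, yt))

    w0 = jnp.zeros(xn.shape[0])
    # Examples whose up-weighting would lower the trusted loss get weight > 0;
    # likely-mislabeled examples are isolated near weight 0.
    g = jax.grad(trusted_loss_after_update)(w0)
    w = jnp.clip(-g, 0.0)
    w = w / (jnp.sum(w) + 1e-8)

    grads = jax.grad(weighted_loss)(params, w, xn, yn)
    new_params = jax.tree_util.tree_map(lambda p, g_: p - lr * g_, params, grads)
    return new_params, w
```

The key design point this sketch illustrates is that the trusted set never trains the model directly here; it only supplies the meta-signal that decides which noisy examples to trust, which is what lets a tiny trusted set steer a much larger noisy one.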
Keywords: Robust deep learning, label noise