Label Noise: Correcting the Forward Correction

TMLR Paper 3379 Authors

23 Sept 2024 (modified: 11 Oct 2024) · Under review for TMLR · CC BY 4.0
Abstract: Training neural network classifiers on datasets with label noise risks overfitting to the noisy labels. To address this issue, researchers have explored alternative loss functions designed to be more robust. The 'forward correction' is a popular approach in which the model outputs are noised before being evaluated against the noisy labels. When the true noise model is known, applying the forward correction guarantees the consistency of the learning algorithm. While beneficial, the correction alone is insufficient to prevent overfitting to finite noisy datasets. This work proposes an approach to tackling overfitting caused by label noise. We observe that the presence of label noise implies a lower bound on the noisy generalised risk. Motivated by this observation, we propose imposing a lower bound on the training loss to mitigate overfitting. Our main contribution is theoretical insight that allows this lower bound to be approximated given only an estimate of the average noise rate. We empirically demonstrate that using this bound significantly enhances robustness across various settings at virtually no additional computational cost.
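The sketch below illustrates the idea described in the abstract, not the authors' actual method: a forward-corrected cross-entropy whose batch loss is floored at a lower bound derived from the noise rate. It assumes symmetric (uniform) label noise, and the specific floor used, the entropy of the noise distribution, is an illustrative choice consistent with the abstract's claim that label noise implies a lower bound on the noisy risk; all function names and the clamping mechanism are hypothetical.

```python
# Minimal sketch of a forward-corrected loss with a training-loss floor.
# Assumes symmetric label noise with known average rate eta; the floor
# derivation and all names are illustrative, not the paper's exact method.

import math
import torch
import torch.nn.functional as F


def symmetric_transition_matrix(num_classes: int, eta: float) -> torch.Tensor:
    """Row-stochastic T for symmetric noise: label kept w.p. 1 - eta,
    flipped uniformly to any other class otherwise."""
    T = torch.full((num_classes, num_classes), eta / (num_classes - 1))
    T.fill_diagonal_(1.0 - eta)
    return T


def noise_entropy_floor(num_classes: int, eta: float) -> float:
    """Assumed lower bound on the noisy cross-entropy risk: the entropy of
    the symmetric noise distribution,
    H = -(1 - eta) log(1 - eta) - eta log(eta / (K - 1))."""
    return -(1 - eta) * math.log(1 - eta) - eta * math.log(eta / (num_classes - 1))


def bounded_forward_loss(logits, noisy_targets, T, floor):
    """Forward correction: noise the model's predictive distribution with T
    before evaluating against noisy labels, then clamp the batch loss at the
    floor so optimisation cannot push the empirical noisy risk below it."""
    probs = F.softmax(logits, dim=1)        # predictions over clean labels
    noisy_probs = probs @ T                 # implied noisy-label distribution
    loss = F.nll_loss(torch.log(noisy_probs + 1e-12), noisy_targets)
    # Clamping is one simple way to impose the bound; the paper's exact
    # mechanism may differ.
    return torch.clamp(loss, min=floor)


if __name__ == "__main__":
    K, eta = 10, 0.4
    T = symmetric_transition_matrix(K, eta)
    floor = noise_entropy_floor(K, eta)
    logits = torch.randn(32, K, requires_grad=True)
    noisy_targets = torch.randint(0, K, (32,))
    loss = bounded_forward_loss(logits, noisy_targets, T, floor)
    loss.backward()
    print(f"floor={floor:.3f}  loss={loss.item():.3f}")
```

Note that clamping zeroes the gradient once the batch loss reaches the floor, halting further fitting of the noisy labels; this adds essentially no computational cost beyond the standard forward correction, consistent with the abstract's claim.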
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Yu_Yao3
Submission Number: 3379