Keywords: Retraining; Label Noise; Classification; Model Accuracy; Label DP; Hard Label
TL;DR: We provide the first theoretical result showing that retraining a model on its own predicted hard labels can improve accuracy, and we empirically demonstrate its efficacy as a simple way to improve local label DP training at no extra privacy cost.
Abstract: The performance of a model trained with *noisy labels* is often improved by simply *retraining* the model with its own predicted *hard* labels (i.e., $1$/$0$ labels). Yet, a detailed theoretical characterization of this phenomenon is lacking. In this paper, we theoretically analyze retraining in a linearly separable setting where the given labels are randomly corrupted, and we prove that retraining can improve upon the population accuracy obtained by initially training with the given (noisy) labels. To the best of our knowledge, this is the first such theoretical result. Retraining finds application in improving training with local label differential privacy (DP), which involves training with noisy labels. We empirically show that retraining selectively on the samples for which the predicted label matches the given label significantly improves label DP training at *no extra privacy cost*; we call this *consensus-based retraining*. As an example, when training ResNet-18 on CIFAR-100 with $\epsilon=3$ label DP, consensus-based retraining yields a $6.4\%$ improvement in accuracy.
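For concreteness, here is a minimal sketch of the two-stage procedure the abstract describes, on a synthetic linearly separable dataset with randomly flipped labels. The data-generation setup, model choice (logistic regression), and all variable names are illustrative assumptions, not details taken from the paper:

```python
# Sketch of consensus-based retraining on synthetic noisy labels.
# Setup and names are illustrative, not from the paper itself.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Linearly separable data whose given labels are randomly corrupted.
n, d, flip_prob = 2000, 20, 0.3
w_star = rng.standard_normal(d)                 # ground-truth separator
X = rng.standard_normal((n, d))
y_clean = (X @ w_star > 0).astype(int)
flips = rng.random(n) < flip_prob
y_noisy = np.where(flips, 1 - y_clean, y_clean)  # the labels we are given

# Stage 1: train on the given (noisy) labels.
model = LogisticRegression().fit(X, y_noisy)
y_pred = model.predict(X)                        # predicted hard labels

# Stage 2 (consensus-based retraining): keep only the samples where the
# predicted label agrees with the given label, then retrain on them.
consensus = y_pred == y_noisy
retrained = LogisticRegression().fit(X[consensus], y_noisy[consensus])

# Compare population accuracy on fresh clean data.
X_test = rng.standard_normal((10_000, d))
y_test = (X_test @ w_star > 0).astype(int)
print("initial model accuracy:  ", model.score(X_test, y_test))
print("retrained model accuracy:", retrained.score(X_test, y_test))
```

Because Stage 2 reuses only the already-released noisy labels and the model's own predictions, it queries no additional private information, which is why the retraining step adds no privacy cost in the label DP setting.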
Primary Area: learning theory
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 8276