EFFICIENT UTILIZATION OF PRE-TRAINED MODEL FOR LEARNING WITH NOISY LABELS

Published: 04 Mar 2023, Last Modified: 31 Mar 2023 · ICLR 2023 Workshop on Trustworthy ML (Oral)
Keywords: Learning with Noisy Label, Pre-trained Model, Noisy Label, Robust Training
TL;DR: We propose an efficient method that leverages pre-trained models to cleanse noisy labels for robust training.
Abstract: In machine learning, when the labels within a training dataset are incorrect, the performance of the trained model is severely degraded. To address this issue, various methods have been studied in the field of Learning with Noisy Labels. These methods aim to identify correctly labeled samples and focus training on them, while minimizing the impact of incorrect labels. Recent studies have demonstrated strong performance on various tasks using large pre-trained models that extract good features regardless of the given labels. However, leveraging these pre-trained models for the noisy label problem has remained largely unexplored due to the computational cost of fine-tuning. In this study, we propose an algorithm named EPL that utilizes pre-trained models to effectively cleanse noisy labels and strengthen robust training. The algorithm follows two main principles: (1) increasing computational efficiency by adjusting the linear classifier alone, and (2) cleansing only the well-clustered classes to avoid creating additional incorrect labels in poorly-clustered classes. We verify that the proposed algorithm achieves significant improvements over previous methods on various benchmarks.
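The two principles above can be illustrated with a minimal sketch. This is a hypothetical implementation, not the authors' released code: it assumes features have already been extracted by a frozen pre-trained model, fits only a linear classifier (here, closed-form ridge regression on one-hot targets), scores each class's clustering quality by mean cosine similarity to its centroid, and relabels samples only in classes whose quality exceeds a threshold. The function name `clean_labels` and the `cluster_threshold` parameter are illustrative assumptions.

```python
import numpy as np

def clean_labels(features, labels, num_classes, cluster_threshold=0.8, reg=1e-3):
    """Hypothetical sketch of pre-trained-feature label cleansing.

    Principle (1): fit only a linear head on frozen features (ridge
    regression in closed form), so no backbone fine-tuning is needed.
    Principle (2): relabel samples only in well-clustered classes, measured
    by mean cosine similarity of a class's features to its centroid.
    """
    n, d = features.shape
    # One-hot targets built from the (possibly noisy) labels.
    Y = np.eye(num_classes)[labels]
    # Closed-form ridge regression: W = (X^T X + reg*I)^-1 X^T Y
    W = np.linalg.solve(features.T @ features + reg * np.eye(d), features.T @ Y)
    preds = np.argmax(features @ W, axis=1)

    # Unit-normalize features for cosine-similarity clustering scores.
    normed = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-12)
    cleaned = labels.copy()
    for c in range(num_classes):
        idx = np.where(labels == c)[0]
        if len(idx) == 0:
            continue
        centroid = normed[idx].mean(axis=0)
        centroid /= np.linalg.norm(centroid) + 1e-12
        quality = float(np.mean(normed[idx] @ centroid))
        # Cleanse only well-clustered classes to avoid creating
        # additional incorrect labels in poorly-clustered ones.
        if quality >= cluster_threshold:
            cleaned[idx] = preds[idx]
    return cleaned
```

On synthetic two-cluster data with one flipped label, the linear head trained on frozen features recovers the correct label for the noisy sample while leaving clean labels untouched; poorly-clustered classes would be skipped entirely.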