Keywords: Pre-Trained Classifier, Long-Tailed Learning, Transfer Learning
TL;DR: We propose PTClf, a new fine-tuning paradigm that exploits the pre-trained classifier to guide the re-trained classifier in learning tail classes.
Abstract: Fine-tuning for long-tailed learning has garnered significant interest owing to the strong priors in foundation models. A prevailing approach is to explore various long-tailed strategies under the standard fine-tuning paradigm, in which the model is initialized from the pre-trained backbone, while the pre-trained classifier is discarded and replaced with a newly re-trained one. However, we observe that, under tail data scarcity, this newly re-trained classifier suffers from weakened discriminative ability and semantic awareness, exhibiting severe imbalance in class-discriminative channels and mislearning general features for tail classes; in contrast, the pre-trained classifier behaves much closer to the oracle, highlighting its strong potential as an effective guide. Motivated by this, we propose a new fine-tuning paradigm, PTClf (Pre-Trained Classifier helps), which exploits the pre-trained classifier to assist the re-trained one in learning tail classes. Specifically, we align downstream classes to upstream classes via label mapping, and guide the re-trained classifier to learn from the mapped pre-trained classifier through initialization and regularization, thereby transferring knowledge from related upstream classes to the data-scarce tail classes. Extensive experiments show that PTClf delivers remarkable benefits for long-tailed data, especially for tail classes, while also exhibiting strong versatility in low-shot learning and domain generalization.
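The abstract outlines a three-step recipe: map downstream classes to upstream classes, initialize the re-trained classifier from the mapped pre-trained classifier, and regularize it toward those weights during fine-tuning. Below is a minimal PyTorch sketch of how such a scheme could look, assuming both classifiers are linear heads over the same backbone feature space; `text_encoder`, `tail_mask`, and `lam` are illustrative placeholders and not the paper's actual interface or hyperparameters.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def map_labels(downstream_names, upstream_names, text_encoder):
    """One simple way to do label mapping: align each downstream class to the
    most similar upstream class by comparing class-name embeddings."""
    with torch.no_grad():
        d = F.normalize(text_encoder(downstream_names), dim=-1)  # [C_down, dim]
        u = F.normalize(text_encoder(upstream_names), dim=-1)    # [C_up, dim]
    return (d @ u.t()).argmax(dim=-1)                            # upstream index per downstream class

def init_from_pretrained(new_clf: nn.Linear, pre_clf: nn.Linear, mapping: torch.Tensor):
    """Initialize the re-trained classifier from the mapped pre-trained rows
    (assumes both heads share the same input feature dimension)."""
    with torch.no_grad():
        new_clf.weight.copy_(pre_clf.weight[mapping])
        if new_clf.bias is not None and pre_clf.bias is not None:
            new_clf.bias.copy_(pre_clf.bias[mapping])

def guidance_regularizer(new_clf: nn.Linear, pre_clf: nn.Linear,
                         mapping: torch.Tensor, tail_mask: torch.Tensor,
                         lam: float = 0.1):
    """Penalty pulling the re-trained weights toward the mapped pre-trained ones,
    restricted here to data-scarce tail classes (added to the training loss)."""
    diff = new_clf.weight - pre_clf.weight[mapping].detach()
    return lam * (diff[tail_mask] ** 2).mean()
```

In this sketch the regularizer would be added to the usual classification loss at each step; the actual mapping rule, regularization form, and class weighting used by PTClf are described in the paper itself.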
Supplementary Material: pdf
Primary Area: transfer learning, meta learning, and lifelong learning
Submission Number: 7593