Towards Fine-tuning-free Few-shot Classification: A Purely Self-supervised Approach

ICLR 2025 Conference Submission 13553 Authors

28 Sept 2024 (modified: 13 Oct 2024) · ICLR 2025 Conference Submission · CC BY 4.0
Keywords: few-shot learning, variational autoencoder, self-supervised learning
TL;DR: fine-tuning is not necessary for few-shot learning
Abstract: One of the core problems of supervised few-shot classification is adapting generalized knowledge learned from substantial labeled source data to scarcely labeled novel target data. What makes this problem challenging is eliminating the undesirable inductive bias that labels introduce, both when learning generalized knowledge during pre-training and when adapting that knowledge during fine-tuning. In this paper, we propose a purely self-supervised method that bypasses this labeling dilemma, focusing on an extreme scenario in which a few-shot feature extractor is learned without any fine-tuning. Our approach builds on two key observations from recent advances in style transfer and self-supervised learning: 1) high-order statistics of feature maps in deep networks encapsulate distinctive information about input samples, and 2) high-quality inputs are not essential for obtaining high-quality representations. Accordingly, we introduce a variant of the vector-quantized variational autoencoder (VQ-VAE) that incorporates a novel coloring operation, which conveys statistical information from the encoder to the decoder and modulates the generation process with these distinctive statistics. With this design, we find that the statistics derived from the encoder's feature maps possess strong discriminative power, enabling effective classification with a simple Euclidean distance metric. Through extensive experiments on standard few-shot classification benchmarks, we show that our fine-tuning-free method achieves performance competitive with fine-tuning-based and meta-learning-based approaches.
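The abstract describes two reusable ingredients: a coloring operation that transfers encoder-feature statistics into the decoder, and few-shot classification by Euclidean distance over those statistics. The sketch below is a minimal illustration under assumptions, not the authors' implementation: the coloring is approximated as an AdaIN-style transfer of channel-wise mean and standard deviation (the paper's "high-order statistics" may involve richer quantities such as covariance), and all function names, tensor shapes, and the nearest-prototype classifier are hypothetical.

```python
# Minimal sketch (assumptions labeled): AdaIN-style "coloring" plus
# Euclidean nearest-prototype classification over feature statistics.
import torch


def channel_stats(feat: torch.Tensor, eps: float = 1e-5):
    """Per-channel mean and std of a feature map of shape (B, C, H, W)."""
    mean = feat.mean(dim=(2, 3), keepdim=True)
    std = feat.var(dim=(2, 3), keepdim=True).add(eps).sqrt()
    return mean, std


def coloring(dec_feat: torch.Tensor, enc_feat: torch.Tensor) -> torch.Tensor:
    """Whiten decoder features, then re-color them with encoder statistics,
    so the decoder's generation is modulated by the input's statistics."""
    d_mean, d_std = channel_stats(dec_feat)
    e_mean, e_std = channel_stats(enc_feat)
    return (dec_feat - d_mean) / d_std * e_std + e_mean


def stats_embedding(feat: torch.Tensor) -> torch.Tensor:
    """Flatten (mean, std) statistics into one descriptor per sample."""
    mean, std = channel_stats(feat)
    return torch.cat([mean.flatten(1), std.flatten(1)], dim=1)


def nearest_prototype(query: torch.Tensor, support: torch.Tensor,
                      support_labels: torch.Tensor, n_way: int) -> torch.Tensor:
    """Assign each query to the class prototype (support mean) nearest
    in Euclidean distance; no gradient updates are involved."""
    prototypes = torch.stack(
        [support[support_labels == c].mean(0) for c in range(n_way)])
    return torch.cdist(query, prototypes).argmin(dim=1)
```

Under these assumptions, an N-way episode would pass support and query images through the frozen encoder, embed each feature map with stats_embedding, and call nearest_prototype directly; the absence of any gradient step at evaluation time is what makes such a pipeline fine-tuning-free.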
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 13553