Track: long paper (up to 8 pages)
Keywords: representation learning, semi-supervised few-shot learning, image classification
Abstract: Few-shot learning (FSL) enables models to perform effectively when only a few labeled samples per class are available. Semi-supervised FSL further exploits abundant unlabeled samples, which are cheap to collect and can improve performance. Several recent methods for this setting rely on clustering to generate pseudo-labels for the unlabeled samples; since clustering quality directly determines these pseudo-labels, it can significantly affect few-shot performance. In this paper, we focus on improving the representation learned by the model in order to improve the clustering and, consequently, the model performance. We propose a semi-supervised few-shot learning approach that performs class-variance-optimized clustering coupled with a cluster separation tuner to cluster the labeled and unlabeled samples more effectively in this setting. It further optimizes the clustering-based pseudo-labeling process through a restricted pseudo-labeling approach and injects semantic information to improve the model's semi-supervised few-shot learning performance. Experiments show that our method outperforms recent state-of-the-art methods on benchmark datasets and remains robust under domain shifts and in open-set settings with distractor classes.
Anonymization: This submission has been anonymized for double-blind review via the removal of identifying information such as names, affiliations, and identifying URLs.
Submission Number: 123