Semantic-guided Diffusion Prototypical Network for Few-shot Classification

31 Jul 2024 (modified: 28 Sept 2024) · IEEE ICIST 2024 Conference Submission · CC BY 4.0
Abstract: Few-shot learning recognizes unlabeled samples from new classes using only a few labeled samples. Many recently proposed approaches have made progress based on meta-learning. However, current methods often overlook the category information within the query set, so the obtained prototypes are often unreliable. To tackle this issue, we design a Semantic-guided Diffusion Prototypical Network (SDPN) to generate representative prototypes for few-shot classification. Specifically, we leverage self-supervised learning to pre-train the feature extractor, thus obtaining accurate visual features. Furthermore, we introduce a semantic-guided diffusion process that generates semantic features of the query set from random noise for a new task. We then introduce a visual-semantic fusion strategy that aligns semantic features with visual features to obtain a representative prototype for each class. We perform comprehensive experiments on the miniImageNet and tieredImageNet datasets, and the results demonstrate that SDPN achieves improvements over state-of-the-art methods.
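The abstract describes a prototypical-network pipeline whose prototypes blend visual and semantic features. The paper's exact fusion mechanism is not given here, so the sketch below only illustrates the two generic ingredients it names: mean-pooled visual prototypes from the support set, and a simple weighted blend with semantic features (standing in for the diffusion-generated ones), followed by nearest-prototype classification. The function names, the blending weight `alpha`, and the linear fusion rule are all hypothetical.

```python
import torch

def fuse_prototypes(support_feats, semantic_feats, alpha=0.5):
    """Hypothetical visual-semantic fusion: mean-pool support features per
    class into visual prototypes, then blend with semantic features.
    support_feats: (n_way, k_shot, d); semantic_feats: (n_way, d)."""
    visual_protos = support_feats.mean(dim=1)            # (n_way, d)
    return alpha * visual_protos + (1 - alpha) * semantic_feats

def classify(query_feats, prototypes):
    """Standard prototypical-network rule: assign each query to the
    nearest prototype under Euclidean distance."""
    dists = torch.cdist(query_feats, prototypes)         # (n_query, n_way)
    return dists.argmin(dim=1)
```

In an actual SDPN-style model, `semantic_feats` would come from the semantic-guided diffusion process and the fusion would likely be learned rather than a fixed linear blend.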
Submission Number: 37