Enhancing Robustness of Prototype with Attentive Information Guided Alignment in Few-Shot Classification

Published: 01 Jan 2023 · Last Modified: 13 Nov 2024 · PAKDD (1) 2023 · License: CC BY-SA 4.0
Abstract: In this paper, we revisit two issues in conventional few-shot learning: i) gaps between the highlighted features of objects in support and query samples, and ii) the loss of explicit local properties caused by globally pooled features. Motivated by these issues, we propose a novel method that enhances robustness in few-shot learning by aligning prototypes with abundantly informed ones. To provide more information, we smoothly augment the support image by carefully manipulating the discriminative part corresponding to the highest attention score, so that the object is consistently represented without distorting the original information. In addition, we leverage word embeddings of each class label to provide abundant feature information, which serves as the basis for closing the gaps between prototypes of different branches. Two parallel branches of explicit attention modules independently refine the support prototypes and the information-rich prototypes. The support prototypes are then aligned with the superior prototypes so that they mimic the rich knowledge of attention-based smooth augmentation and word embeddings. We transfer the imitated knowledge to queries in a task-adaptive manner and cross-adapt the queries and prototypes to generate features crucial for metric-based few-shot learning. Extensive experiments demonstrate that our method consistently outperforms existing methods on four benchmark datasets.
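To make the prototype-alignment idea concrete, here is a minimal sketch of the metric-based pipeline the abstract describes: class prototypes are mean-pooled from support embeddings, an alignment loss pulls them toward information-rich prototypes (e.g. ones refined with augmentation and word embeddings), and queries are classified by nearest prototype. The function names and the plain L2 alignment loss are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def prototypes(support, labels, n_classes):
    # Mean-pool support embeddings per class to form one prototype per class.
    return np.stack([support[labels == c].mean(axis=0) for c in range(n_classes)])

def align_loss(proto, rich_proto):
    # Illustrative L2 alignment: pull support prototypes toward the
    # information-rich prototypes from the parallel branch.
    return np.mean((proto - rich_proto) ** 2)

def classify(query, proto):
    # Metric-based classification: assign each query to its nearest
    # prototype under squared Euclidean distance.
    dists = ((query[:, None, :] - proto[None, :, :]) ** 2).sum(axis=-1)
    return dists.argmin(axis=1)

# Toy 2-way example with 2-D embeddings.
support = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [1.1, 1.0]])
labels = np.array([0, 0, 1, 1])
proto = prototypes(support, labels, n_classes=2)
pred = classify(np.array([[0.0, 0.1]]), proto)  # query close to class 0
```

In the actual method, `rich_proto` would come from the attention-refined branch that sees the smoothly augmented image and the class-label word embedding; the alignment loss then distills that richer knowledge into the plain support prototypes.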