Improving RENet by Introducing Modified Cross Attention for Few-Shot Classification

16 May 2022 (modified: 05 May 2023) · NeurIPS 2022 Submitted · Readers: Everyone
Keywords: few-shot classification, attention
TL;DR: Two cross attention modules (CSA and CAA) that improve RENet for few-shot classification
Abstract: Few-shot classification is challenging because the goal is to classify unlabeled samples given only a handful of labeled examples. It has been shown that cross attention helps generate more discriminative features for few-shot learning. This paper extends that idea and proposes two cross attention modules: cross scaled attention (CSA) and cross aligned attention (CAA). Specifically, CSA scales different feature maps so that they are better matched, and CAA applies principal component analysis to further align features from different images. Experiments show that both CSA and CAA achieve consistent improvements over state-of-the-art methods on four widely used few-shot classification benchmarks, miniImageNet, tieredImageNet, CIFAR-FS, and CUB-200-2011; CSA is slightly faster, while CAA achieves higher accuracies.
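To make the cross attention idea concrete, the following is a minimal NumPy sketch of dot-product cross attention between a query feature map and a support feature map. The `scale` factor stands in for the paper's CSA-style scaling; its exact formulation is not given in the abstract, so the default `1/sqrt(c)` used here is an assumption borrowed from standard scaled dot-product attention, and all function and variable names are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(query_feat, support_feat, scale=None):
    """Hypothetical sketch of cross attention between two feature maps.

    query_feat, support_feat: arrays of shape (c, h, w), e.g. CNN features
    of a query image and a support image. Spatial dimensions are flattened,
    a correlation matrix between the two maps is computed, and support
    features are aggregated per query position via attention weights.
    """
    c, h, w = query_feat.shape
    q = query_feat.reshape(c, h * w)        # (c, hw)
    s = support_feat.reshape(c, h * w)      # (c, hw)
    if scale is None:
        scale = 1.0 / np.sqrt(c)            # assumed scaling, not from the paper
    attn = softmax(scale * (q.T @ s), axis=-1)    # (hw, hw): query -> support
    attended = (attn @ s.T).T.reshape(c, h, w)    # support features per query position
    return attended

# Toy usage with random features
rng = np.random.default_rng(0)
qf = rng.normal(size=(4, 3, 3))
sf = rng.normal(size=(4, 3, 3))
out = cross_attention(qf, sf)   # shape (4, 3, 3)
```

The attended map can then be fused with the original query features (e.g. by addition or concatenation) to produce the more discriminative representations the abstract refers to.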
Supplementary Material: pdf
