Abstract: Despite the significant progress of recent deep neural networks, most deep learning algorithms still rely heavily on abundant training samples. To address this limitation, we propose an effective and interpretable few-shot classification model using Saliency-Guided Complementary Attention (SGCA), which learns transferable representations and builds a robust classification module simultaneously. Concretely, we train our feature extractor with an auxiliary task that separates object regions from background clutter, guided by saliency detection signals. In addition, to make this separation beneficial to downstream tasks, we introduce a complementary attention mechanism that forces the classification module to attend to multiple informative parts of the image. Extensive experiments on few-shot learning tasks demonstrate the effectiveness of our method, e.g., we achieve 68.81% and 84.60% accuracy in the 5-way 1-shot and 5-shot settings on mini-ImageNet, respectively.
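To make the two mechanisms named in the abstract concrete, the following is a minimal sketch of how a complementary attention module and a saliency-guided auxiliary loss could fit together. This is not the authors' released code; all module names, tensor shapes, and the choice of binary cross-entropy as the separation loss are illustrative assumptions, and the saliency mask is assumed to come from an off-the-shelf saliency detector.

```python
# Illustrative sketch only; names, shapes, and losses are assumptions,
# not the SGCA authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ComplementaryAttention(nn.Module):
    """Predicts a spatial attention map A and uses both A and (1 - A),
    pushing the classifier toward multiple informative regions."""
    def __init__(self, channels: int):
        super().__init__()
        # 1x1 conv predicts a single-channel spatial attention map.
        self.attend = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, feats: torch.Tensor):
        # feats: (B, C, H, W) feature map from the backbone.
        attn = torch.sigmoid(self.attend(feats))   # (B, 1, H, W), values in (0, 1)
        primary = feats * attn                     # features at attended regions
        complement = feats * (1.0 - attn)          # features at complementary regions
        return primary, complement, attn

def saliency_separation_loss(attn: torch.Tensor, saliency: torch.Tensor):
    """Auxiliary loss encouraging the attention map to separate the salient
    object from background clutter; `saliency` is a precomputed soft mask
    in [0, 1] (an assumption about the saliency signal's form)."""
    return F.binary_cross_entropy(attn, saliency)

# Usage sketch with illustrative shapes:
feats = torch.randn(4, 64, 10, 10)      # backbone features
saliency = torch.rand(4, 1, 10, 10)     # stand-in for saliency-detector output
module = ComplementaryAttention(channels=64)
primary, complement, attn = module(feats)
aux_loss = saliency_separation_loss(attn, saliency)
```

Under this reading, the primary and complementary feature streams would each feed the few-shot classification head, while the auxiliary loss is added to the episodic training objective; the exact weighting and head design are not specified by the abstract.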