Abstract: Few-shot learning (FSL), which aims to learn from very few labeled examples, is a challenging task that frequently arises in real-world applications. An appealing direction for tackling it is the metric-based approach, which learns a transferable embedding space across tasks from a related base dataset and generalizes it to novel few-shot tasks. Recently, a large body of work has proposed increasingly sophisticated representation learning methods to improve performance. Despite some promising results, how these methods improve few-shot performance remains unexplored. Motivated by this question, we investigate the relationship between performance and the structure of the learned embedding space and find that the two are strongly correlated: to capture more valuable features of novel classes, the intra-class distribution of base classes should be more scattered. We therefore introduce the von Mises-Fisher (vMF) distribution and employ a vMF similarity loss whose concentration parameter, $\kappa$, controls the intra-class distribution on a hypersphere. By setting a smaller $\kappa$, our method learns a more transferable embedding space with high intra-class diversity. Extensive experiments on two widely used datasets demonstrate the effectiveness of our method.
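To make the role of $\kappa$ concrete, below is a minimal PyTorch sketch of a vMF-style similarity loss: embeddings and per-class mean directions are normalized onto the unit hypersphere, and the vMF log-likelihood (up to the normalizing constant) reduces to $\kappa \mu_c^\top z$, which is fed to a cross-entropy loss. The class name, signature, and default $\kappa$ value are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VMFSimilarityLoss(nn.Module):
    """Sketch of a vMF similarity loss: cross-entropy over kappa-scaled
    cosine similarities to learned class mean directions (hypothetical API)."""

    def __init__(self, feat_dim: int, num_classes: int, kappa: float = 16.0):
        super().__init__()
        # One mean direction per base class, learned jointly with the encoder.
        self.mu = nn.Parameter(torch.randn(num_classes, feat_dim))
        # Smaller kappa -> flatter softmax -> more scattered intra-class distribution.
        self.kappa = kappa

    def forward(self, features: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        z = F.normalize(features, dim=-1)   # project embeddings onto the unit hypersphere
        mu = F.normalize(self.mu, dim=-1)   # unit-norm class mean directions
        # vMF log-likelihood up to a constant: kappa * mu_c^T z
        logits = self.kappa * z @ mu.t()
        return F.cross_entropy(logits, labels)
```

With a small $\kappa$, the per-class vMF densities are less concentrated around their mean directions, so the training signal tolerates more intra-class spread on the sphere, which is the property the abstract argues transfers better to novel classes.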