Abstract: Few-shot classification aims to learn a classifier that recognizes classes unseen during training from only limited labeled examples. While significant progress has been made, the growing complexity of network designs, meta-learning algorithms, and differences in implementation details make a fair comparison difficult. In this paper, we present 1) a consistent comparative analysis of several representative few-shot classification algorithms, with results showing that deeper backbones significantly reduce the gap across methods including the baseline, 2) a slightly modified baseline method that surprisingly achieves competitive performance when compared with the state-of-the-art on both the mini-ImageNet and the CUB datasets, and 3) a new experimental setting for evaluating the cross-domain generalization ability of few-shot classification algorithms. Our results reveal that reducing intra-class variation is an important factor when the feature backbone is shallow, but not as critical when using deeper backbones. In a realistic, cross-domain evaluation setting, we show that a baseline method with a standard fine-tuning practice compares favorably against other state-of-the-art few-shot learning algorithms.
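For context, the baseline referred to in the abstract follows a standard transfer-learning recipe: pre-train a feature backbone on the base classes, then fine-tune a new linear classifier on the few labeled support examples of the novel classes. The sketch below is a minimal, hedged illustration of that fine-tuning step in PyTorch; the function name and arguments are hypothetical and not taken from the released repository.

```python
# Minimal sketch (not the authors' exact code) of the baseline fine-tuning step:
# the backbone pre-trained on base classes is frozen, and a fresh linear
# classifier is trained on the support set of a novel-class episode.
import torch
import torch.nn as nn
import torch.nn.functional as F

def finetune_baseline(backbone, support_x, support_y, n_way, n_steps=100, lr=0.01):
    """Fine-tune a new linear head on the support set of one episode.

    backbone:  frozen feature extractor (e.g. a ResNet trained on base classes)
    support_x: tensor of shape (n_way * n_shot, C, H, W)
    support_y: tensor of shape (n_way * n_shot,) with labels in [0, n_way)
    """
    backbone.eval()
    with torch.no_grad():
        feats = backbone(support_x)          # (N, feat_dim); backbone stays frozen
    classifier = nn.Linear(feats.size(1), n_way)
    optimizer = torch.optim.SGD(classifier.parameters(), lr=lr, momentum=0.9)
    for _ in range(n_steps):
        optimizer.zero_grad()
        loss = F.cross_entropy(classifier(feats), support_y)
        loss.backward()
        optimizer.step()
    return classifier  # then applied to query-set features of the same episode
```

The paper's Baseline++ variant differs only in the classifier head (cosine similarity to learned class weights instead of a plain linear layer); the fine-tuning procedure on the support set is analogous.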
Keywords: few-shot classification, meta-learning
TL;DR: A detailed empirical study of few-shot classification that reveals challenges in the standard evaluation setting and points to a new direction.
Code: [wyharveychen/CloserLookFewShot](https://github.com/wyharveychen/CloserLookFewShot) + [12 community implementations](https://paperswithcode.com/paper/?openreview=HkxLXnAcFQ)
Data: [CUB-200-2011](https://paperswithcode.com/dataset/cub-200-2011), [ImageNet](https://paperswithcode.com/dataset/imagenet), [mini-Imagenet](https://paperswithcode.com/dataset/mini-imagenet)
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/a-closer-look-at-few-shot-classification/code)