Revisiting Feature Normalization and Augmentation in Few-shot Learning: a Simple Comparison of Baselines

Anonymous

30 Sept 2021 (modified: 05 May 2023) · NeurIPS 2021 Workshop MetaLearn Blind Submission
Keywords: Few-shot Learning, Feature Augmentation, Feature Normalization
TL;DR: We find that feature augmentation and normalization are also important for few-shot learning at inference time.
Abstract: The growing complexity of network designs makes Few-Shot Learning (FSL) algorithms difficult to compare fairly. Typical FSL methods consist of two parts: a feature extractor trained on base classes, and a predictor tested on a given support-set task. Most existing research aims to improve the feature extractor for better generalization, whereas only a small number of papers note that the predictor design is also important. In this paper, we investigate the predictor module, which relies on three components: feature normalization (and transformation), feature augmentation, and a classifier. In comparative ablation experiments we show that (i) with appropriate feature normalization, logistic regression and SVM classifiers perform best in most cases, outperforming the cosine and nearest-neighbour classifiers; (ii) feature normalization is very important, and feature augmentation is very helpful in one-shot learning; and (iii) our modified baseline methods (a good selection of existing components) achieve competitive performance compared with the state of the art on the mini-ImageNet, tiered-ImageNet, and CUB datasets.
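To make the predictor pipeline concrete, below is a minimal sketch of the three components named in the abstract: feature normalization, feature augmentation, and a classifier fitted on the support set. The specific choices here (L2 normalization, Gaussian-noise feature augmentation, and a scikit-learn logistic regression) are illustrative assumptions, not the paper's exact recipe.

```python
# Hedged sketch of a few-shot predictor acting on pre-extracted features.
# L2 normalization, Gaussian-noise augmentation, and logistic regression are
# assumed placeholders for the components discussed in the abstract.
import numpy as np
from sklearn.linear_model import LogisticRegression


def normalize(features):
    """L2-normalize each feature vector (one common feature-normalization choice)."""
    return features / np.linalg.norm(features, axis=1, keepdims=True)


def augment(features, labels, n_aug=5, noise_std=0.1, rng=None):
    """Feature augmentation by adding small Gaussian noise to support features.

    Augmentation of this kind is most useful in one-shot settings, where each
    class has only a single support example.
    """
    rng = np.random.default_rng(rng)
    feats, labs = [features], [labels]
    for _ in range(n_aug):
        feats.append(features + noise_std * rng.standard_normal(features.shape))
        labs.append(labels)
    return np.concatenate(feats), np.concatenate(labs)


def predict(support_feats, support_labels, query_feats):
    """Fit a classifier on the normalized, augmented support set and label the queries."""
    x, y = augment(normalize(support_feats), support_labels)
    clf = LogisticRegression(max_iter=1000)
    clf.fit(x, y)
    return clf.predict(normalize(query_feats))


if __name__ == "__main__":
    # Toy 5-way 1-shot task with 512-d features from a (hypothetical) pretrained backbone.
    rng = np.random.default_rng(0)
    support = rng.standard_normal((5, 512))            # one support example per class
    labels = np.arange(5)
    queries = support + 0.05 * rng.standard_normal((5, 512))
    print(predict(support, labels, queries))           # expected: [0 1 2 3 4]
```

Swapping `LogisticRegression` for an SVM or a cosine/nearest-neighbour classifier in this sketch corresponds to the classifier ablation described in the abstract.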