Keywords: few-shot learning, image classification, distribution estimation
Abstract: Learning from a limited number of samples is challenging because the learned model can easily overfit to the biased distribution formed by only a few training examples. In this paper, we calibrate the distribution of these few-sample classes by transferring statistics from classes with sufficient examples. An adequate number of examples can then be sampled from the calibrated distribution to expand the inputs to the classifier. We assume every dimension in the feature representation follows a Gaussian distribution, so the mean and variance of the distribution can be borrowed from similar classes whose statistics are better estimated thanks to their adequate number of samples. Our method can be built on top of off-the-shelf pretrained feature extractors and classification models without extra parameters. We show that a simple logistic regression classifier trained on features sampled from our calibrated distribution outperforms the state of the art on three datasets (~5% improvement on miniImageNet over the next best method). Visualization of the generated features demonstrates that our calibrated distribution is an accurate estimation.
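The abstract describes the procedure concretely enough to sketch. Below is a minimal NumPy/scikit-learn sketch of the idea as stated above: per-dimension Gaussian statistics are borrowed from the most similar base classes, extra features are sampled from the calibrated distribution, and a plain logistic regression classifier is trained on the expanded set. The function names, the use of Euclidean distance between means as the similarity measure, and the hyperparameters `k` and `n_sampled` are illustrative assumptions, not the authors' implementation (see the linked repository for that).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def calibrate(support_feature, base_means, base_vars, k=2):
    """Borrow per-dimension Gaussian statistics from the k base classes
    whose means lie closest to the support feature.

    base_means, base_vars: (n_base_classes, d) arrays of per-dimension
    feature means/variances estimated from the many-sample base classes
    (illustrative inputs, assumed precomputed).
    """
    dists = np.linalg.norm(base_means - support_feature, axis=1)
    nearest = np.argsort(dists)[:k]
    # Calibrated mean: average of the support feature and selected base means.
    mean = np.vstack([base_means[nearest], support_feature[None]]).mean(axis=0)
    # Calibrated variance: average of the selected base-class variances.
    var = base_vars[nearest].mean(axis=0)
    return mean, var

def classify_episode(support_feats, support_labels, query_feats,
                     base_means, base_vars, n_sampled=100, k=2):
    """Expand each few-shot class with features drawn from its calibrated
    Gaussian, then fit a logistic regression classifier (no extra
    learned parameters beyond the classifier itself)."""
    X, y = [support_feats], [support_labels]
    for feat, label in zip(support_feats, support_labels):
        mean, var = calibrate(feat, base_means, base_vars, k)
        X.append(np.random.normal(mean, np.sqrt(var),
                                  size=(n_sampled, mean.shape[0])))
        y.append(np.full(n_sampled, label))
    clf = LogisticRegression(max_iter=1000).fit(np.vstack(X), np.concatenate(y))
    return clf.predict(query_feats)
```

Because calibration only averages precomputed statistics and sampling is closed-form, the whole step runs on top of any frozen pretrained feature extractor.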
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
One-sentence Summary: We calibrate the feature distribution of few-sample classes by transferring statistics from similar classes with sufficient examples, so that a simple classifier trained on features sampled from the calibrated distribution achieves state-of-the-art few-shot classification accuracy (code: https://github.com/ShuoYang-1998/Few_Shot_Distribution_Calibration).
Code: [ShuoYang-1998/ICLR2021-Oral_Distribution_Calibration](https://github.com/ShuoYang-1998/ICLR2021-Oral_Distribution_Calibration) + [5 community implementations](https://paperswithcode.com/paper/?openreview=JWOiYxMG92s)
Data: [mini-Imagenet](https://paperswithcode.com/dataset/mini-imagenet), [tieredImageNet](https://paperswithcode.com/dataset/tieredimagenet)