Generalized Sampling Method for Few-Shot Learning

29 Sept 2021 (modified: 13 Feb 2023) · ICLR 2022 Conference Withdrawn Submission
Keywords: Few-shot learning, distribution estimation, sampling method
Abstract: Few-shot learning is an important problem in machine learning, as large labelled datasets take considerable time and effort to assemble. Most few-shot learning algorithms suffer from one of two limitations: they either require the design of sophisticated models and loss functions, hampering interpretability, or they employ statistical techniques that rest on assumptions which may not hold across different datasets or features. Building on recent work that extrapolates the distributions of small-sample classes from the most similar larger classes, we propose a Generalized Sampling method that learns to estimate few-shot distributions for classification as weighted combinations of the random variables of all large classes. We use a form of covariance shrinkage to provide robustness against singular covariances caused by overparameterized features or small datasets. We show that a single hyperparameter in our method matches the accuracies obtained from Euclidean, Mahalanobis, and other distances used for estimating the weights of the random variables. Our method works with arbitrary off-the-shelf feature extractors and outperforms the existing state of the art on the miniImagenet, CUB, and Stanford Dogs datasets by 3% to 5% on 5-way 1-shot and 5-way 5-shot tasks.
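The abstract's core idea, estimating a few-shot class distribution as a weighted combination of base-class statistics with covariance shrinkage, can be sketched as follows. This is a minimal illustration assuming a softmax weighting over Euclidean distances to base-class means and shrinkage toward the identity; the paper's exact weighting scheme, shrinkage form, and hyperparameters are not given in the abstract, so all names and values here are illustrative.

```python
import numpy as np

def shrink_covariance(cov, alpha=0.1):
    # Shrinkage toward the identity guards against singular covariances
    # from overparameterized features or small datasets.
    d = cov.shape[0]
    return (1 - alpha) * cov + alpha * np.eye(d)

def estimate_few_shot_distribution(support, base_means, base_covs,
                                   alpha=0.1, beta=1.0):
    """Estimate a few-shot class distribution as a weighted combination of
    all base (large) classes. `beta` plays the role of the single
    distance-controlling hyperparameter mentioned in the abstract
    (hypothetical parameterization)."""
    proto = support.mean(axis=0)                        # few-shot prototype
    dists = np.linalg.norm(base_means - proto, axis=1)  # distance to each base class
    weights = np.exp(-beta * dists)                     # nearer classes weigh more
    weights /= weights.sum()
    mean = weights @ base_means                         # weighted mean
    cov = np.tensordot(weights, base_covs, axes=1)      # weighted covariance
    return mean, shrink_covariance(cov, alpha)

# Toy usage: 3 base classes in a 4-D feature space, 1-shot support set.
rng = np.random.default_rng(0)
base_means = rng.normal(size=(3, 4))
base_covs = np.stack([np.eye(4) for _ in range(3)])
support = rng.normal(size=(1, 4))
mu, sigma = estimate_few_shot_distribution(support, base_means, base_covs)
samples = rng.multivariate_normal(mu, sigma, size=50)   # augment the support set
```

The sampled features could then train an ordinary linear classifier, which is what makes this family of methods compatible with any off-the-shelf feature extractor.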
One-sentence Summary: We propose a statistical method for estimating few-shot distributions that works with any off-the-shelf feature extractor and gives a 3% to 5% accuracy improvement over the state of the art.