Keywords: few-shot learning, ensemble learning, deep learning, machine learning, image classification
TL;DR: We develop an efficient (i.e., computation-efficient, convenient, and universally adaptable) ensemble method that can significantly boost the performance of various few-shot learning algorithms.
Abstract: Due to a lack of labeled training data and the resulting unreliability of empirical risk minimization, few-shot learning algorithms usually suffer from high variance and bias in their predictions. Ensembling, by contrast, combines predictions from multiple predictors, alleviating this unreliability. We believe that ensembling is a simple yet effective solution to the core problems of few-shot learning; we therefore develop a plug-in (ensemble) method to boost the performance of trained few-shot models. To maximize the performance of the ensemble, we use epoch training to develop the feature representations used in the plug-in, in contrast to the episodic training used to obtain the feature representations of the original few-shot models. To minimize the extra computation cost induced by ensembling, we adopt a non-deep classifier (e.g., a random forest) for the plug-in, which completes its training within a few seconds. Our method achieves substantial improvements in few-shot learning, consistently outperforming all baseline methods.
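The plug-in idea described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the random features, the 5-way 5-shot episode sizes, and the equal-weight averaging of the two predictive distributions are all assumptions; in a real pipeline the features would come from an epoch-trained backbone and the base probabilities from the trained few-shot model.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_way, k_shot, n_query, d = 5, 5, 15, 64  # assumed episode configuration

# Stand-ins for features from an epoch-trained backbone (assumption:
# real features would come from the pretrained network, not noise).
support_feats = rng.normal(size=(n_way * k_shot, d))
support_labels = np.repeat(np.arange(n_way), k_shot)
query_feats = rng.normal(size=(n_way * n_query, d))

# Non-deep plug-in classifier: trains in seconds on CPU.
rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(support_feats, support_labels)
rf_probs = rf.predict_proba(query_feats)

# Stand-in for the original few-shot model's class probabilities
# (here a softmax over random logits, purely for illustration).
fsl_logits = rng.normal(size=(n_way * n_query, n_way))
fsl_probs = np.exp(fsl_logits) / np.exp(fsl_logits).sum(1, keepdims=True)

# Ensemble by averaging the two predictive distributions.
ensemble_probs = 0.5 * (rf_probs + fsl_probs)
preds = ensemble_probs.argmax(axis=1)
```

The key point is that the plug-in is fit only on the small support set at test time, so the extra cost per episode is a single shallow-model fit rather than any additional deep training.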
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Deep Learning and representational learning
Supplementary Material: zip