Keywords: Few-shot Learning, Active Learning, Image Classification
TL;DR: We investigate the utility of actively selecting support instances for the few-shot learning task
Abstract: Few-shot learning aims to learn new concepts by transferring knowledge from very few labeled samples, which are typically selected at random. Active learning offers a promising alternative to this random selection. In this work, we investigate how actively identifying informative samples affects the performance of few-shot learning models. We show that, although regular classification tasks with larger amounts of labeled data benefit from active learning approaches, these benefits do not reliably carry over to the few-shot setting. To characterize the best possible active few-shot learning performance, we introduce Single-Instance-Oracle and Batch-Oracle, active methods that assume access to the labels of the unlabeled pool and the test set, and show via these “upper bounds” that there is little room for improving few-shot models through actively selecting instances.