Abstract: Few-shot learning (FSL) is a central problem in meta-learning, where learners must learn efficiently
from only a few labeled examples. Within FSL, feature pre-training has recently become an increasingly
popular strategy to significantly improve generalization performance. However, the contribution
of pre-training is often overlooked and understudied, with limited theoretical understanding of
its impact on meta-learning performance. Further, pre-training requires a consistent set of global
labels shared across training tasks, which may be unavailable in practice. In this work, we address
the above issues by first showing the connection between pre-training and meta-learning. We
discuss why pre-training yields more robust meta-representations and connect the theoretical
analysis to existing works and empirical results. Second, we introduce Meta Label Learning
(MeLa), a novel meta-learning algorithm that learns task relations by inferring global labels across
tasks. This allows us to exploit pre-training for FSL even when global labels are unavailable or
ill-defined. Lastly, we introduce an augmented pre-training procedure that further improves the
learned meta-representation. Empirically, MeLa outperforms existing methods across a diverse
range of benchmarks, in particular under a more challenging setting where the number of training
tasks is limited and labels are task-specific. We also provide an extensive ablation study to highlight
its key properties.