Are Few-shot Learning Benchmarks Too Simple?

25 Sept 2019 (modified: 05 May 2023) · ICLR 2020 Conference Blind Submission
TL;DR: Omniglot and miniImageNet are too simple for few-shot learning because they can be solved without using any labels during meta-evaluation, as we demonstrate with a method called Centroid Networks.
Abstract: We argue that the widely used Omniglot and miniImageNet benchmarks are too simple because their class semantics do not vary across episodes, which defeats their intended purpose of evaluating few-shot classification methods. The class semantics of Omniglot are invariably "characters," and those of miniImageNet, "object categories." Because the class semantics are so consistent across episodes, a new method we propose, called Centroid Networks, can achieve surprisingly high accuracies on Omniglot and miniImageNet without using any labels at meta-evaluation time. Our results suggest that these benchmarks are ill-suited for evaluating supervised few-shot classification, since the supervision itself is not necessary during meta-evaluation. The Meta-Dataset, a collection of 10 datasets, was recently proposed as a harder few-shot classification benchmark. Using our method, we derive a new metric, the Class Semantics Consistency Criterion, and use it to quantify the difficulty of Meta-Dataset. Finally, under some restrictive assumptions, we show that Centroid Networks is faster and more accurate than a state-of-the-art learning-to-cluster method (Hsu et al., 2018).
Code: https://drive.google.com/open?id=1GZgwaY4_zQizv2tc6IhaYYO-Zam0CHZi
Keywords: few-shot, classification, meta-learning, benchmark, omniglot, miniimagenet, meta-dataset, learning to cluster, learning, cluster, unsupervised
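
The label-free meta-evaluation idea described in the abstract can be illustrated with a short sketch: embed the support set, cluster the embeddings without using their labels, and classify queries by nearest centroid. The code below is an illustrative approximation, not the authors' implementation: it uses plain K-means where the paper uses a Sinkhorn-based clustering step, and a trained encoder is replaced here by synthetic 2-D features.

```python
import numpy as np

def cluster_support(embeddings, n_classes, n_iters=10, seed=None):
    """Cluster support-set embeddings without labels.

    Plain K-means, standing in for the paper's Sinkhorn-based
    clustering step (illustrative only).
    """
    rng = np.random.default_rng(seed)
    # Initialize centroids from randomly chosen support points.
    centroids = embeddings[rng.choice(len(embeddings), n_classes, replace=False)]
    for _ in range(n_iters):
        # Assign each embedding to its nearest centroid.
        dists = np.linalg.norm(embeddings[:, None] - centroids[None], axis=-1)
        assign = dists.argmin(axis=1)
        # Recompute each centroid; keep the old one if its cluster is empty.
        for k in range(n_classes):
            if (assign == k).any():
                centroids[k] = embeddings[assign == k].mean(axis=0)
    return centroids

def classify_queries(query_embeddings, centroids):
    """Label each query with the index of its nearest centroid."""
    dists = np.linalg.norm(query_embeddings[:, None] - centroids[None], axis=-1)
    return dists.argmin(axis=1)

# Toy 5-way, 5-shot episode with synthetic 2-D "embeddings"
# (a trained encoder would normally produce these features).
rng = np.random.default_rng(0)
class_centers = rng.normal(size=(5, 2)) * 3
support = np.concatenate(
    [rng.normal(loc=c, scale=0.1, size=(5, 2)) for c in class_centers]
)

centroids = cluster_support(support, n_classes=5, seed=0)
print(classify_queries(support[:3], centroids))
```

Note that the cluster indices produced this way are arbitrary, so reporting a clustering accuracy against ground-truth labels requires matching clusters to classes, typically via an optimal assignment such as the Hungarian algorithm.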