A Unified Few-Shot Classification Benchmark to Compare Transfer and Meta Learning Approaches

07 Jun 2021, 13:37 (modified: 12 Oct 2021, 13:16) · NeurIPS 2021 Datasets and Benchmarks Track (Round 1) · Readers: Everyone
Keywords: transfer learning, meta-learning, few-shot classification
TL;DR: A benchmark with a low barrier to entry that enables a direct comparison of recent approaches emerging from both the transfer learning and meta-learning research communities.
Abstract: Meta-learning and transfer learning are two successful families of approaches to few-shot learning. Despite highly related goals, state-of-the-art advances in each family are measured largely in isolation from each other. As a result of diverging evaluation norms, a direct or thorough comparison of different approaches is challenging. To bridge this gap, we introduce a few-shot classification evaluation protocol named VTAB+MD with the explicit goal of facilitating the sharing of insights between the two communities. We demonstrate its accessibility in practice by performing a cross-family study of the best transfer and meta learners that report on both a large-scale meta-learning benchmark (Meta-Dataset, MD) and a transfer learning benchmark (Visual Task Adaptation Benchmark, VTAB). We find that, on average, large-scale transfer methods (Big Transfer, BiT) outperform competing approaches on MD, even when trained only on ImageNet. In contrast, meta-learning approaches struggle to compete on VTAB when trained and validated on MD. However, BiT is not without limitations, and pushing for scale does not improve performance on highly out-of-distribution MD tasks. We hope that this work contributes to accelerating progress in few-shot learning research.
Supplementary Material: zip
URL: https://github.com/google-research/meta-dataset/blob/main/VTAB-plus-MD.md