Learning Classifier Synthesis for Generalized Few-Shot Learning

25 Sept 2019 (modified: 05 May 2023) · ICLR 2020 Conference Withdrawn Submission
Keywords: Generalized Few-Shot Learning (GFSL), Few-Shot Learning, Meta-Learning
TL;DR: We propose to learn to synthesize few-shot and many-shot classifiers with a single objective function for GFSL.
Abstract: Object recognition in the real world requires handling long-tailed or even open-ended data. An ideal visual system needs to reliably recognize the populated visual concepts while efficiently learning about emerging new categories from a few training instances. Class-balanced many-shot learning and few-shot learning each tackle one side of this problem, either by learning strong classifiers for the populated categories or by learning to learn few-shot classifiers for the tail classes. In this paper, we investigate the problem of generalized few-shot learning (GFSL) -- a model at deployment is required not only to learn about "tail" categories with few shots, but also to simultaneously classify the "head" and "tail" categories. We propose Classifier Synthesis Learning (CASTLE), a learning framework that learns how to synthesize calibrated few-shot classifiers in addition to the multi-class classifiers of the "head" classes, leveraging a shared neural dictionary. CASTLE sheds light on inductive GFSL by optimizing a single clean and effective GFSL learning objective. It demonstrates superior performance to existing GFSL algorithms and strong baselines on the MiniImageNet and TieredImageNet datasets. More interestingly, it outperforms previous state-of-the-art methods when evaluated on standard few-shot learning.