POODLE: Improving Few-shot Learning via Penalizing Out-of-Distribution Samples

May 21, 2021 (edited Jan 22, 2022) · NeurIPS 2021 Poster · Readers: Everyone
  • Keywords: few-shot learning
  • TL;DR: We leverage samples from distractor classes or randomly generated noise to improve the generalization of few-shot learners.
  • Abstract: In this work, we propose to use out-of-distribution samples, i.e., unlabeled samples coming from outside the target classes, to improve few-shot learning. Specifically, we exploit easily available out-of-distribution samples to drive the classifier away from irrelevant features by maximizing the distance from prototypes to out-of-distribution samples while minimizing the distance to in-distribution samples (i.e., support and query data). Our approach is simple to implement, agnostic to feature extractors, lightweight without any additional cost for pre-training, and applicable to both inductive and transductive settings. Extensive experiments on various standard benchmarks demonstrate that the proposed method consistently improves the performance of pretrained networks with different architectures. (A minimal sketch of this objective is given after this list.)
  • Supplementary Material: pdf
  • Code Of Conduct: I certify that all co-authors of this work have read and commit to adhering to the NeurIPS Statement on Ethics, Fairness, Inclusivity, and Code of Conduct.
  • Code: https://github.com/VinAIResearch/poodle
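The core objective described in the abstract (pull in-distribution features toward class prototypes, push out-of-distribution features away from them) can be written as a simple prototype-distance loss. The sketch below is an illustration only, not the authors' implementation from the linked repository; the squared Euclidean distance, the `alpha` weighting, and the nearest-prototype term for the out-of-distribution samples are assumptions made for clarity.

```python
import torch

def poodle_style_loss(feat_in, labels_in, feat_ood, n_way, alpha=1.0):
    """Prototype-distance loss: pull labeled in-distribution features toward
    their class prototypes and push out-of-distribution (OOD) features away
    from every prototype. Illustrative only; the distance choice and `alpha`
    are assumptions, not the paper's exact objective."""
    # Class prototypes: per-class mean of the labeled in-distribution features.
    prototypes = torch.stack([feat_in[labels_in == c].mean(dim=0) for c in range(n_way)])

    # Squared Euclidean distances from each sample to every prototype.
    dist_in = torch.cdist(feat_in, prototypes).pow(2)    # (n_in, n_way)
    dist_ood = torch.cdist(feat_ood, prototypes).pow(2)  # (n_ood, n_way)

    # Minimize each labeled sample's distance to its own class prototype ...
    pull = dist_in.gather(1, labels_in.view(-1, 1)).mean()
    # ... while maximizing (i.e., penalizing small) distance from OOD samples
    # to their nearest prototype.
    push = dist_ood.min(dim=1).values.mean()

    return pull - alpha * push


# Example: a 5-way 1-shot episode with 64-d features and 20 OOD samples
# (e.g., distractor-class features or random noise).
feat_in, labels_in = torch.randn(5, 64), torch.arange(5)
feat_ood = torch.randn(20, 64)
loss = poodle_style_loss(feat_in, labels_in, feat_ood, n_way=5)
```

Because the loss only needs extracted features, this kind of term can be added on top of any frozen or fine-tuned backbone, which is consistent with the abstract's claim that the method is agnostic to feature extractors and adds no pre-training cost.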