Self-training For Few-shot Transfer Across Extreme Task Differences

Published: 12 Jan 2021, Last Modified: 22 Oct 2023
ICLR 2021 Oral
Keywords: few-shot learning, self-training, cross-domain few-shot learning
Abstract: Most few-shot learning techniques are pre-trained on a large, labeled “base dataset”. In problem domains where such large labeled datasets are not available for pre-training (e.g., X-ray, satellite images), one must resort to pre-training in a different “source” problem domain (e.g., ImageNet), which can be very different from the desired target task. Traditional few-shot and transfer learning techniques fail in the presence of such extreme differences between the source and target tasks. In this paper, we present a simple and effective solution to tackle this extreme domain gap: self-training a source domain representation on unlabeled data from the target domain. We show that this improves one-shot performance on the target domain by 2.9 points on average on the challenging BSCD-FSL benchmark consisting of datasets from multiple domains.
One-sentence Summary: Self-training a source domain classifier on unlabeled data from the target domain improves cross-domain few-shot transfer.
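
For a concrete picture of the self-training step described in the abstract, the sketch below illustrates the basic idea: a teacher pre-trained on the labeled source data produces soft pseudo-labels for unlabeled target images, and a student is trained to match them alongside the usual supervised loss on the source data. All model and variable names here are placeholders, and this simplified loop is not the authors' STARTUP implementation (the full method in the linked repository includes additional components); treat it only as an illustration of self-training across a domain gap.

```python
# Minimal illustrative sketch of self-training on unlabeled target data.
# Teacher/student architectures, batch construction, and loss weighting are
# placeholders, not the STARTUP code (https://github.com/cpphoo/STARTUP).
import torch
import torch.nn as nn
import torch.nn.functional as F

num_source_classes = 64
teacher = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, num_source_classes))
student = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, num_source_classes))
optimizer = torch.optim.SGD(student.parameters(), lr=0.1)

# Dummy tensors standing in for labeled source images and unlabeled target images.
source_x = torch.randn(8, 3, 32, 32)
source_y = torch.randint(0, num_source_classes, (8,))
target_x = torch.randn(8, 3, 32, 32)

teacher.eval()
for step in range(10):
    with torch.no_grad():
        # Soft pseudo-labels from the frozen, source-trained teacher.
        pseudo = F.softmax(teacher(target_x), dim=1)

    # Supervised loss on the labeled source data.
    loss_source = F.cross_entropy(student(source_x), source_y)
    # KL divergence pulls the student toward the teacher's predictions on target data.
    loss_target = F.kl_div(F.log_softmax(student(target_x), dim=1), pseudo,
                           reduction="batchmean")
    loss = loss_source + loss_target

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```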
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Code: [cpphoo/STARTUP](https://github.com/cpphoo/STARTUP)
Data: [ImageNet](https://paperswithcode.com/dataset/imagenet), [mini-Imagenet](https://paperswithcode.com/dataset/mini-imagenet), [tieredImageNet](https://paperswithcode.com/dataset/tieredimagenet)
Community Implementations: [3 code implementations](https://www.catalyzex.com/paper/arxiv:2010.07734/code)