Source-Free Few-Shot Domain Adaptation

29 Sept 2021 (modified: 13 Feb 2023) · ICLR 2022 Conference Withdrawn Submission
Keywords: domain adaptation, few-shot learning, model finetuning
Abstract: Deep models are prone to performance degradation under domain shift between the source (training) data and the target (test) data. Test-time adaptation of pre-trained source models with streaming unlabelled target data is an attractive setting because it protects the privacy of the source data, but it imposes mini-batch size and class-distribution requirements on the streaming data that may be impractical to satisfy. In this paper, we propose the source-free few-shot adaptation setting to address these practical obstacles to deploying test-time adaptation. Specifically, we propose a constrained optimization of the source model's batch normalization layers that finetunes the coefficients of a linear combination between the training statistics and the support-set statistics. The proposed method is easy to implement and improves source model performance with as few as one labelled target sample per class. Experiments on several multi-domain classification datasets demonstrate that our method achieves comparable or better performance than test-time adaptation, while being free of its streaming constraints.
One-sentence Summary: Constrained optimization of batch normalization layers improves pre-trained source model performance under domain shift with limited support data.
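The abstract describes the mechanism but the page carries no code, so the following PyTorch sketch illustrates one plausible reading: each batch normalization layer's statistics become a learnable convex combination of the source running statistics and statistics estimated from the labelled support set, and only the mixing coefficients are finetuned. The names (InterpolatedBN, estimate_support_stats, wrap_bn_layers), the per-layer sigmoid parameterization, and the Adam finetuning loop are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class InterpolatedBN(nn.Module):
    """BatchNorm2d replacement whose statistics are a learnable convex
    combination of source (training) and target support-set statistics.
    Assumes affine BN (the PyTorch default). The per-layer sigmoid logit
    is one simple way to realize the constrained optimization; it is an
    assumption, not the paper's released parameterization."""

    def __init__(self, bn: nn.BatchNorm2d, support_mean, support_var):
        super().__init__()
        self.eps = bn.eps
        # Frozen affine parameters and source statistics from the pre-trained model.
        self.register_buffer("weight", bn.weight.detach().clone())
        self.register_buffer("bias", bn.bias.detach().clone())
        self.register_buffer("source_mean", bn.running_mean.clone())
        self.register_buffer("source_var", bn.running_var.clone())
        # Statistics estimated once from the labelled target support set.
        self.register_buffer("support_mean", support_mean)
        self.register_buffer("support_var", support_var)
        # The only trainable parameter: one mixing logit per layer.
        self.alpha_logit = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        a = torch.sigmoid(self.alpha_logit)  # coefficient constrained to (0, 1)
        mean = a * self.source_mean + (1 - a) * self.support_mean
        var = a * self.source_var + (1 - a) * self.support_var
        x_hat = (x - mean[None, :, None, None]) / torch.sqrt(
            var[None, :, None, None] + self.eps)
        return x_hat * self.weight[None, :, None, None] + self.bias[None, :, None, None]


@torch.no_grad()
def estimate_support_stats(model, support_x):
    """Collect per-channel BN input statistics in one pass over the support set."""
    stats, hooks = {}, []

    def make_hook(name):
        def hook(module, inputs, output):
            x = inputs[0]
            stats[name] = (x.mean(dim=(0, 2, 3)),
                           x.var(dim=(0, 2, 3), unbiased=False))
        return hook

    for name, m in model.named_modules():
        if isinstance(m, nn.BatchNorm2d):
            hooks.append(m.register_forward_hook(make_hook(name)))
    model.eval()
    model(support_x)
    for h in hooks:
        h.remove()
    return stats


def wrap_bn_layers(model, stats):
    """Swap every BatchNorm2d for an InterpolatedBN holding its support stats."""
    for name, m in list(model.named_modules()):
        if isinstance(m, nn.BatchNorm2d):
            parent = model
            *path, leaf = name.split(".")
            for part in path:
                parent = getattr(parent, part)
            mean, var = stats[name]
            setattr(parent, leaf, InterpolatedBN(m, mean, var))
    return model


# Hypothetical usage: `model` is a pre-trained source network and
# (support_x, support_y) is the k-shot labelled target support set.
def adapt(model, support_x, support_y, steps=100, lr=1e-2):
    model = wrap_bn_layers(model, estimate_support_stats(model, support_x))
    # Freeze everything except the per-layer mixing coefficients.
    for n, p in model.named_parameters():
        p.requires_grad_(n.endswith("alpha_logit"))
    optimizer = torch.optim.Adam(
        [p for p in model.parameters() if p.requires_grad], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        loss = nn.functional.cross_entropy(model(support_x), support_y)
        loss.backward()
        optimizer.step()
    return model
```

Squashing the coefficient through a sigmoid is only one way to keep the mixture in [0, 1]; per-channel coefficients or an explicit projection step would be equally consistent with the abstract's description.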