Prompt-induced prototype alignment for few-shot unsupervised domain adaptation

Published: 01 Jan 2025, Last Modified: 15 May 2025 · Expert Syst. Appl. 2025 · CC BY-SA 4.0
Abstract: Unsupervised Domain Adaptation excels at transferring predictive models from a labeled source domain to an unlabeled target domain. However, acquiring sufficient source-domain samples is challenging in some real-world applications. To address this issue, prior work introduced few-shot unsupervised domain adaptation, which explores adaptation methods with lower requirements on the number of source-domain samples. Despite this progress, existing methods still require relatively many source samples and perform significantly worse than methods trained on ample source data. In this paper, we explore a more realistic and challenging few-shot setting in which the source domain contains only a few samples per category. To extract more information from this limited data, we adopt the vision–language pre-trained model CLIP as the backbone and propose a prompt-guided prototype alignment network. Specifically, we use category text features obtained from domain-shared soft prompts as class-specific prototypes and align cross-domain image features with these shared prototypes. To reduce the impact of erroneous pseudo-labels during alignment, we design a sample-weighting method based on a truncated Laplace distribution, together with an alignment method that mines the implicit negative information in pseudo-labels via complementary labels. Experiments on several domain adaptation benchmarks demonstrate that our method offers significant advantages when source-domain samples are scarce and achieves performance competitive with unsupervised domain adaptation methods that rely on ample labeled data.
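The core idea described in the abstract can be sketched numerically: treat (normalized) text features as class prototypes in CLIP's shared embedding space, pseudo-label target images by cosine similarity to those prototypes, and down-weight low-confidence pseudo-labels with a Laplace density truncated to the valid confidence range. The function names, the softmax temperature, and the exact Laplace parameterization below are illustrative assumptions, not the paper's actual formulation; random vectors stand in for real CLIP features.

```python
import numpy as np

def truncated_laplace_weights(conf, loc=1.0, scale=0.25):
    # Weight each pseudo-labeled sample by a Laplace density centered at
    # maximum confidence (loc=1.0) and restricted to confidences in [0, 1],
    # so that low-confidence (likely erroneous) pseudo-labels contribute
    # less. The paper's exact parameterization may differ; this is a sketch.
    w = np.exp(-np.abs(conf - loc) / scale)
    return w / w.max()

def prototype_align_loss(image_feats, text_protos, temp=100.0):
    # L2-normalize so dot products are cosine similarities, mirroring
    # CLIP's shared image-text embedding space.
    img = image_feats / np.linalg.norm(image_feats, axis=1, keepdims=True)
    txt = text_protos / np.linalg.norm(text_protos, axis=1, keepdims=True)
    logits = temp * img @ txt.T                        # (N, C) similarities
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)          # softmax over classes
    pseudo = probs.argmax(axis=1)                      # pseudo-labels
    conf = probs.max(axis=1)                           # confidence scores
    weights = truncated_laplace_weights(conf)
    # Weighted cross-entropy pulling each image toward its pseudo-labeled
    # prototype (the complementary-label term is omitted in this sketch).
    nll = -np.log(probs[np.arange(len(pseudo)), pseudo] + 1e-12)
    return float((weights * nll).mean())

rng = np.random.default_rng(0)
protos = rng.normal(size=(5, 512))    # stand-in for CLIP text prototypes
feats = rng.normal(size=(32, 512))    # stand-in for target image features
loss = prototype_align_loss(feats, protos)
print(loss >= 0.0)
```

In the actual method, the prototypes would come from CLIP's text encoder applied to learned domain-shared soft prompts, and the loss would be minimized jointly over source and target image features.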