Abstract: Source-free domain generalization (SFDG) is an emerging paradigm of domain generalization that aims to train a model generalizable to unseen target domains without access to any actual source data. Existing methods have pragmatically tackled this challenging problem by using large-scale vision-language pre-trained models (VLMs). However, they do not fully exploit the potential of VLMs. Although the favorable configuration should inherently differ across input domains, these methods apply a single domain-shared configuration (e.g., classifier, representation) to inputs from every domain, which limits their flexibility across domains. In this paper, we propose a novel SFDG method called Domain Prompt-Discriminator Collaborative Learning. Our method endows the model with domain flexibility by jointly training two modules: (1) a set of domain-specific prompts that enhance generalizability to unseen domains and (2) a domain discriminator that selects favorable domains for each input image. Notably, this training can be carried out as simple domain classification learning. Our method's design also opens the door to a simple yet effective test-time adaptation technique that further boosts recognition accuracy. We empirically demonstrate that our method consistently outperforms state-of-the-art SFDG methods on four domain generalization benchmarks.
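The interplay of the two modules can be illustrated with a toy sketch. Everything below is our own assumption for illustration, not the paper's actual formulation: random vectors stand in for frozen VLM (CLIP-like) image and text-prompt embeddings, each domain owns one set of class prompt embeddings, and a linear discriminator produces soft domain weights that combine the per-domain class logits.

```python
import math
import random

random.seed(0)
EMB, N_CLASSES, N_DOMAINS = 8, 4, 3  # toy sizes, chosen arbitrarily


def rand_vec(n):
    return [random.gauss(0, 1) for _ in range(n)]


def dot(a, b):
    return sum(x * y for x, y in zip(a, b))


def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]


# Hypothetical stand-in for a frozen VLM image embedding.
image_emb = rand_vec(EMB)

# (1) Domain-specific prompts: one set of class text embeddings per domain.
domain_prompts = [[rand_vec(EMB) for _ in range(N_CLASSES)]
                  for _ in range(N_DOMAINS)]

# (2) Domain discriminator: one weight vector per domain; its logits score
#     how favorable each domain is for the given image.
discriminator = [rand_vec(EMB) for _ in range(N_DOMAINS)]


def predict(img):
    # Soft domain assignment, obtained as simple domain classification.
    w = softmax([dot(d, img) for d in discriminator])
    # Per-domain class logits, combined with the discriminator's weights.
    return [
        sum(w[d] * dot(domain_prompts[d][c], img) for d in range(N_DOMAINS))
        for c in range(N_CLASSES)
    ]


logits = predict(image_emb)
pred = max(range(N_CLASSES), key=lambda c: logits[c])
```

In this sketch the discriminator replaces a single domain-shared classifier with an input-dependent mixture over domain-specific ones; in practice both the prompts and the discriminator would be trained jointly, whereas here they are fixed random parameters.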
External IDs: dblp:conf/ijcnn/MitsuzumiKK25