Improving the Effective Coverage Space for Source-Free Domain Generalization via Visual-Language Models
Keywords: Source-Free Domain Generalization, Visual-Language Model, Effective Coverage Space, Dynamic Feature Generation and Fusion
Abstract: With the widespread application of deep learning in computer vision, deep models often suffer a significant drop in performance when facing unseen data, which hinders their practical deployment. In this work, a dynamic feature construction and fusion method (DFCF) based on vision-language models is proposed for the task of source-free domain generalization. The method introduces the concept of Effective Coverage Space (ECS) and uses vision-language models to dynamically generate diverse feature representations and construct a virtual dataset, transforming source-free domain generalization into a supervised learning task. In the absence of source-domain images, the effective coverage of the feature space is extended by increasing the diversity of styles and features, thereby enhancing the model's adaptability to unseen domains. Experimental results demonstrate that the method significantly improves the performance of source-free domain generalization across multiple datasets, effectively enhancing the generalization capability of the model.
Supplementary Material: zip
Primary Area: transfer learning, meta learning, and lifelong learning
Submission Number: 17347
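The core idea in the abstract, generating diverse labeled features around class representations to widen the effective coverage space without any source images, can be illustrated with a minimal numpy sketch. The class anchors below are random placeholders standing in for the text embeddings a vision-language model (e.g., CLIP) would produce from class prompts; the perturbation scheme, `style_scale` parameter, and nearest-class-mean classifier are all illustrative assumptions, not the paper's actual DFCF implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
num_classes, dim, per_class = 5, 64, 200

# Placeholder "text embeddings" standing in for the unit-normalized
# features a VLM text encoder would give for class prompts
# (e.g. "a photo of a dog"). Hypothetical values for illustration only.
class_anchors = rng.normal(size=(num_classes, dim))
class_anchors /= np.linalg.norm(class_anchors, axis=1, keepdims=True)

def generate_virtual_dataset(anchors, n_per_class, style_scale=0.3, rng=rng):
    """Diversify each class anchor with random style perturbations,
    mimicking the abstract's idea of widening the effective coverage
    space by increasing feature diversity, without source images."""
    feats, labels = [], []
    for c, anchor in enumerate(anchors):
        noise = rng.normal(scale=style_scale, size=(n_per_class, dim))
        f = anchor + noise
        f /= np.linalg.norm(f, axis=1, keepdims=True)
        feats.append(f)
        labels.append(np.full(n_per_class, c))
    return np.vstack(feats), np.concatenate(labels)

# Build the virtual labeled dataset and treat it as a supervised task:
# here a simple nearest-class-mean classifier stands in for the model.
X, y = generate_virtual_dataset(class_anchors, per_class)
means = np.stack([X[y == c].mean(axis=0) for c in range(num_classes)])
pred = (X @ means.T).argmax(axis=1)
acc = (pred == y).mean()
```

The virtual features carry class labels by construction, which is what lets the source-free problem be trained as ordinary supervised learning.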