Multi-Source Knowledge-Fusion for Source-Free Domain Adaptation in Object Detection

ICLR 2026 Conference Submission 22533 Authors

20 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Multi-source learning, domain adaptation
Abstract: Source-free domain adaptation (SFDA) enables adaptation to a target domain without access to source data or labeled target samples, making it particularly valuable in privacy-sensitive applications such as military operations and healthcare. To leverage complementary and transferable knowledge from multiple source domains, multi-source-free domain adaptation (MSFDA) extends SFDA by collectively adapting pre-trained models from multiple sources. However, a key challenge in MSFDA is the significant distribution shift between the multiple source domains and the target domain, which often leads to suboptimal performance, especially in complex tasks like object detection. To address this, we propose a novel multi-source knowledge-fusion framework that effectively aggregates knowledge from multiple sources and mitigates distribution discrepancies. We first perform text-driven feature augmentation that narrows the semantic gap by transforming unlabeled target images into source-stylized images using only textual descriptions of each source domain, so that the pre-trained source models are directly applicable. Each domain expert is then updated with its respective stylized target images, while the aggregator undergoes both local and global updates to ensure stable adaptation. To further improve pseudo-label quality, peer-network-based confidence selection is performed to filter out noisy pseudo-labels. Our method achieves state-of-the-art performance on multiple real-world datasets, demonstrating its effectiveness in multi-source-free domain adaptation.
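To make the peer-network-based confidence selection step described above more concrete, the sketch below illustrates one plausible realization: detections from a primary domain expert are kept as pseudo-labels only when a peer detector corroborates them with an overlapping box of the same class. This is a minimal, hypothetical example (not the authors' implementation); the function name, thresholds, and the torchvision-style detector output format are assumptions for illustration only.

```python
# Illustrative sketch (assumption, not the submission's code): peer-network
# confidence selection for pseudo-label filtering in SFDA object detection.
import torch
from torchvision.ops import box_iou


def select_confident_pseudo_labels(primary, peer, score_thr=0.7, iou_thr=0.5):
    """Keep primary detections that a peer detector corroborates.

    `primary` and `peer` are dicts with 'boxes' (N, 4), 'scores' (N,), and
    'labels' (N,), mimicking the torchvision detection output format.
    """
    keep_boxes, keep_labels = [], []
    # Pairwise IoU between every primary box and every peer box.
    ious = box_iou(primary["boxes"], peer["boxes"]) if len(peer["boxes"]) else None
    for i in range(len(primary["boxes"])):
        if primary["scores"][i] < score_thr:
            continue  # drop low-confidence detections outright
        if ious is None:
            continue  # no peer detections to agree with
        # Keep a detection only if some peer box overlaps it with the same class.
        overlap = ious[i] >= iou_thr
        same_cls = peer["labels"] == primary["labels"][i]
        if (overlap & same_cls).any():
            keep_boxes.append(primary["boxes"][i])
            keep_labels.append(primary["labels"][i])
    if keep_boxes:
        return torch.stack(keep_boxes), torch.stack(keep_labels)
    return torch.empty(0, 4), torch.empty(0, dtype=torch.long)
```

In this sketch, the retained boxes and labels would serve as pseudo-labels for updating each domain expert on its stylized target images; the agreement criterion (IoU plus class match) is one common way to suppress noisy labels, though the paper's actual selection rule may differ.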
Primary Area: applications to computer vision, audio, language, and other modalities
Submission Number: 22533