Pick and Adapt: An Iterative Approach for Source-Free Domain Adaptation

24 Sept 2023 (modified: 11 Feb 2024) · Submitted to ICLR 2024
Primary Area: transfer learning, meta learning, and lifelong learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: representation learning, domain adaptation
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Abstract: Domain adaptation plays a pivotal role in deploying models when the inference data distribution differs from the training data. It becomes particularly challenging in source-free domain adaptation (SFDA) scenarios, where access to the source domain data is restricted due to data privacy concerns. To tackle such cases, existing approaches often resort to generating source-like data for standard unsupervised domain adaptation, or endeavor to fine-tune a model pre-trained on the source domain using self-supervised training techniques. Instead, our approach takes a different path by theoretically analyzing an empirical risk bound for SFDA. We identify the population risk and the domain drift as the major factors in this risk bound. Subsequently, we introduce top-k importance sampling to purify the pseudo labels and thus reduce the population risk, and we further present a nearest-neighbor-voting-based semantic domain alignment to mitigate the domain drift. Finally, we propose an iterative optimization that alternates between these two steps over multiple rounds. Extensive experiments on three widely used domain adaptation datasets, i.e., Office-Home, DomainNet, and VisDA-C, demonstrate consistently superior performance over state-of-the-art methods.
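The two steps named in the abstract can be illustrated with a minimal sketch. This is an assumption-laden reconstruction, not the paper's actual method: the function names (`topk_pseudo_labels`, `knn_vote_labels`), the per-class confidence ranking used for top-k selection, and the Euclidean nearest-neighbor majority vote are all hypothetical choices for illustration.

```python
import numpy as np


def topk_pseudo_labels(probs, k_frac=0.5):
    # probs: (N, C) softmax outputs of the source-pretrained model on target data.
    # Hypothetical purification step: for each predicted class, keep only the
    # top k_frac most confident samples as trusted pseudo-labels.
    preds = probs.argmax(axis=1)
    conf = probs.max(axis=1)
    keep = []
    for c in np.unique(preds):
        idx = np.where(preds == c)[0]
        k = max(1, int(len(idx) * k_frac))
        keep.extend(idx[np.argsort(-conf[idx])[:k]])
    keep = np.array(sorted(keep))
    return keep, preds[keep]


def knn_vote_labels(feats, sel_idx, sel_labels, n_neighbors=1):
    # Hypothetical alignment step: label every target sample by a majority
    # vote over its nearest purified (pseudo-labeled) neighbors in feature space.
    sel_feats = feats[sel_idx]
    dists = ((feats[:, None, :] - sel_feats[None, :, :]) ** 2).sum(-1)
    nn = np.argsort(dists, axis=1)[:, :n_neighbors]
    return np.array([np.bincount(sel_labels[row]).argmax() for row in nn])
```

In an iterative scheme of the kind the abstract describes, one would alternate the two functions for several rounds: purify pseudo-labels, relabel the target set by neighbor voting, fine-tune the model, and repeat.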
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 8728