Towards Debiased Source-Free Domain Adaptation

26 Sept 2024 (modified: 14 Nov 2024) · ICLR 2025 Conference Withdrawn Submission · CC BY 4.0
Keywords: source-free domain adaptation, sfda, domain adaptation, contrastive learning, spurious correlation, debiasing, debiased sfda
TL;DR: Debiased SFDA aims to alleviate source-learned spurious correlations during target adaptation.
Abstract: Source-Free Domain Adaptation (SFDA) aims to adapt a model trained on an inaccessible source domain $S$ to a different, unlabelled target domain $T$. The conventional approach generates pseudo-labels for the $T$ samples with the source-trained model, which are then used for model adaptation. However, we show that the adapted model is biased towards the spurious correlations in $S$, leading to catastrophic failures on $T$ samples that are dissimilar to $S$. Unfortunately, without any prior knowledge about these spurious correlations, the current SFDA setting has no mechanism to circumvent this bias. We introduce a practical setting to address this gap -- Debiased SFDA, where the model receives additional supervision from a pre-trained, frozen reference model. This setting stays in line with the essence of SFDA, which accommodates proprietary source-domain training, while also offering prior knowledge that is unaffected by source-domain training to facilitate debiasing. Under this setting, we propose 1) a simple contrastive objective that debiases the source-trained model from spurious correlations inconsistent with the reference model; 2) a diagnostic metric that evaluates the degree to which an adapted model is biased towards $S$. Our objective can be easily plugged into different baselines for debiasing, and through extensive evaluations, we demonstrate that it yields consistent improvements across standard benchmarks. Code is provided in the supplementary material.
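The authors' actual objective is defined in the supplementary code; as a rough illustration only, the sketch below shows one plausible form of such a contrastive term in PyTorch. It pairs each target sample's feature from the adapting model with the frozen reference model's feature of the same sample, so that representations inconsistent with the bias-free reference are penalised. All names here (`model`, `ref_model`, `lambda_debias`, the temperature value) are hypothetical, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def debias_contrastive_loss(feat_adapt: torch.Tensor,
                            feat_ref: torch.Tensor,
                            temperature: float = 0.07) -> torch.Tensor:
    """InfoNCE-style objective: each target sample's feature from the
    adapting model is pulled towards the frozen reference model's feature
    of the same sample (positive pair), and pushed away from the reference
    features of the other samples in the batch (negatives)."""
    z_a = F.normalize(feat_adapt, dim=1)   # (B, D) adapted features
    z_r = F.normalize(feat_ref, dim=1)     # (B, D) frozen reference features
    logits = z_a @ z_r.t() / temperature   # (B, B) cosine-similarity logits
    targets = torch.arange(z_a.size(0), device=z_a.device)  # diagonal = positives
    return F.cross_entropy(logits, targets)

# Hypothetical usage: the reference model stays frozen, so its features
# carry no source-trained bias; gradients flow only through the adapting model.
#
# with torch.no_grad():
#     feat_ref = ref_model(x_target)
# loss = adapt_loss + lambda_debias * debias_contrastive_loss(model(x_target), feat_ref)
```

Because the term only adds a loss on target-domain features, it can be attached to existing SFDA baselines without changing their pseudo-labelling pipelines, which is consistent with the plug-in usage the abstract describes.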
Supplementary Material: zip
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 5938