What Has Been Overlooked in Contrastive Source-Free Domain Adaptation: Leveraging Source-Informed Latent Augmentation within Neighborhood Context
Abstract: Source-free domain adaptation (SFDA) involves adapting a model originally trained using a labeled dataset (source domain) to perform effectively on an unlabeled dataset (target domain) without relying on any source data during adaptation. This adaptation is especially crucial when significant disparities in data distributions exist between the two domains and when there are privacy concerns regarding the source model's training data. The absence of access to source data during adaptation makes it challenging to analytically estimate the domain gap. To tackle this issue, various techniques have been proposed, such as unsupervised clustering, contrastive learning, and continual learning. In this paper, we first conduct an extensive theoretical analysis of SFDA based on contrastive learning, primarily because it has demonstrated superior performance compared to other techniques. Motivated by the obtained insights, we then introduce a straightforward yet highly effective latent augmentation method tailored for contrastive SFDA. This augmentation method leverages the dispersion of latent features within the neighborhood of the query sample, guided by the source pre-trained model, to enhance the informativeness of positive keys. Our approach, based on a single InfoNCE-based contrastive loss, outperforms state-of-the-art SFDA methods on widely recognized benchmark datasets.
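The core idea from the abstract, perturbing a query's positive key using the dispersion of latent features in its neighborhood and training with a single InfoNCE loss, can be sketched roughly as follows. This is an illustrative approximation only, not the authors' SiLAN implementation: the function names, the neighborhood size `k`, and the `temperature` value are all hypothetical choices for the sketch.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def neighborhood_augmented_key(query, features, k=5, rng=None):
    """Hypothetical sketch: build a positive key by perturbing the query
    with Gaussian noise scaled by the per-dimension standard deviation
    (dispersion) of its k nearest neighbors in latent space."""
    rng = rng or np.random.default_rng(0)
    dists = np.linalg.norm(features - query, axis=1)
    neighbors = features[np.argsort(dists)[:k]]
    sigma = neighbors.std(axis=0)  # neighborhood dispersion per dimension
    return query + rng.normal(0.0, 1.0, size=query.shape) * sigma

def info_nce(queries, keys, temperature=0.07):
    """Standard InfoNCE loss: the i-th key is the positive for the
    i-th query; all other keys in the batch act as negatives."""
    q = l2_normalize(queries)
    k = l2_normalize(keys)
    logits = q @ k.T / temperature                 # (N, N) similarities
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))             # positives on the diagonal
```

In this sketch the noise scale is data-dependent: queries in dense, tightly clustered neighborhoods receive small perturbations, while queries in dispersed regions receive larger ones, which is one plausible reading of how neighborhood context could make positive keys more informative.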
Submission Length: Regular submission (no more than 12 pages of main content)
Supplementary Material: zip
Changes Since Last Submission: Following the recommendations from our Action Editor and reviewers, we have made several additions and revisions to our manuscript:
1. ***Appendix A.6*** details the computational efficiency analysis.
2. ***Appendix A.7*** contains the ablation studies validating the effectiveness of our SiLAN in enhancing other source-free domain adaptation methods.
3. ***Appendix A.8*** discusses the sensitivity analysis on the effect of source pre-training.
4. ***Appendix A.9*** presents experiments using vision transformers as the backbone.
5. ***Appendices C.1*** and ***C.2*** elaborate on the intuition behind ***Proposition 3*** and ***Lemma 4***, respectively.
6. ***Appendix C.3*** provides a detailed derivation of ***Equation 3***.
7. ***Appendix C.4*** discusses the connections among the three theoretical insights from our analysis.
8. Additionally, we have included the ***SF(DA)$^2$*** baseline [1] for comparison in ***Tables 1***, ***2***, and ***3***, as suggested.
9. We have also revised ***Paragraph 5*** to enhance clarity, following a reviewer's suggestion.
**References**
[1] Uiwon Hwang, Jonghyun Lee, Juhyeon Shin, and Sungroh Yoon. SF(DA)$^2$: Source-free domain adaptation through the lens of data augmentation. In The Twelfth International Conference on Learning Representations (ICLR), 2024.
Video: https://youtu.be/A0YLvomkOwU
Code: https://github.com/JingWang18/SiLAN
Certifications: Featured Certification
Assigned Action Editor: ~Eleni_Triantafillou1
Submission Number: 2099