AudoFormer: An Efficient Transformer with Consistent Auxiliary Domain for Source-free Domain Adaptation

23 Sept 2023 (modified: 25 Mar 2024) · ICLR 2024 Conference Withdrawn Submission
Keywords: Unsupervised learning, Transfer learning, Source-free domain adaptation, Vision transformer
TL;DR: An efficient transformer with consistent auxiliary domain to improve the source-free domain adaptation
Abstract: Source-free domain adaptation (SFDA), which tackles domain adaptation without direct access to the source domain, has gradually gained widespread attention. However, because source-domain data are inaccessible, deterministic domain-invariant features cannot be obtained. Current advanced methods mainly rely on pseudo-labels or consistent neighbor labels for self-supervision, which are susceptible to hard samples and affected by domain bias. In this paper, we propose an efficient transFormer with a consistent Auxiliary domain for source-free domain adaptation, abbreviated as AudoFormer, which addresses invariant feature representation from a new perspective, namely domain consistency. Concretely, AudoFormer constructs an auxiliary domain module (ADM) block, which derives diversified representations from the global attention features in the intermediate layers. Then, based on the auxiliary and target domains, we identify invariant feature representations by exploiting multiple consistency strategies, i.e., dynamically evaluated consistent labels and consistent neighbors, which divide the target samples into source-like easy samples and target-specific hard samples. Finally, we align the source-like samples with the target-specific samples via conditional guided multi-kernel maximum mean discrepancy (CMK-MMD), which guides the hard samples to align with their corresponding easy samples. To verify the effectiveness, we conduct extensive experiments on three benchmark datasets (i.e., Office-31, Office-Home, and VISDA-C). Results show that our approach achieves strong performance across multiple domain adaptation benchmarks compared to other state-of-the-art baselines. Code will be available.
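For context on the alignment step, below is a minimal sketch of plain multi-kernel MMD between two feature sets, assuming Gaussian kernels. The paper's CMK-MMD additionally conditions the alignment (e.g., on class or pseudo-label information), whose exact formulation is not given in the abstract; the function names, bandwidths, and toy data here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def gaussian_kernel(x, y, bandwidth):
    # Pairwise Gaussian (RBF) kernel matrix between rows of x and rows of y.
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * bandwidth ** 2))

def mk_mmd(source_feats, target_feats, bandwidths=(1.0, 2.0, 4.0)):
    # Multi-kernel MMD: average the (biased) MMD^2 estimate over several
    # Gaussian bandwidths, so no single kernel scale dominates.
    mmd2 = 0.0
    for bw in bandwidths:
        k_ss = gaussian_kernel(source_feats, source_feats, bw).mean()
        k_tt = gaussian_kernel(target_feats, target_feats, bw).mean()
        k_st = gaussian_kernel(source_feats, target_feats, bw).mean()
        mmd2 += k_ss + k_tt - 2.0 * k_st
    return mmd2 / len(bandwidths)

# Toy usage: discrepancy between "source-like" easy features and
# "target-specific" hard features (random stand-ins for real features).
rng = np.random.default_rng(0)
easy = rng.normal(0.0, 1.0, size=(64, 16))
hard = rng.normal(0.5, 1.0, size=(64, 16))
print(mk_mmd(easy, hard))
```

Minimizing such a discrepancy pulls the hard-sample features toward the easy-sample features; the conditional variant described in the abstract would compute this alignment per class rather than over the whole batch.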
Supplementary Material: zip
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 6780