Adversarial Examples Guided Pseudo-label Refinement for Decentralized Domain Adaptation

22 Sept 2022 (modified: 13 Feb 2023) · ICLR 2023 Conference Withdrawn Submission
Keywords: Domain Adaptation, Federated Learning
Abstract: Unsupervised domain adaptation (UDA) methods usually assume that data from multiple domains can be pooled for centralized adaptation. Unfortunately, this assumption compromises data privacy, causing traditional methods to fail in practical scenarios. To cope with this issue, we present a new approach named Adversarial Examples Guided Pseudo-label Refinement for Decentralized Domain Adaptation (AGREE), which conducts target adaptation through an iterative training process in which only models, not data, are exchanged across domains. More specifically, to train a promising target model, we leverage Adversarial Examples (AEs) to filter out error-prone predictions of source models for each target sample based on both robustness and confidence, and then treat the most frequent remaining prediction as the pseudo-label. In addition, to improve central model aggregation, we introduce Knowledge Contribution (KC) to compute reasonable aggregation weights. Extensive experiments on several standard datasets verify the superiority of the proposed AGREE. In particular, AGREE achieves new state-of-the-art performance on the DomainNet and Office-Caltech10 datasets. The implementation code will be made publicly available.
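The abstract's filtering step can be illustrated with a minimal sketch: for each target sample, a source model's vote is kept only if its prediction is confident and unchanged under an adversarial perturbation, and the surviving votes are aggregated by majority. This is an assumption-laden illustration of the idea, not the authors' exact algorithm; the function name, the confidence threshold, and the tie-breaking rule are all hypothetical.

```python
from collections import Counter

def refine_pseudo_label(clean_preds, adv_preds, confidences, conf_threshold=0.8):
    """Illustrative sketch of AE-guided pseudo-label refinement.

    clean_preds[i]: class predicted by source model i on the clean target sample
    adv_preds[i]:   model i's prediction on an adversarial example of the sample
    confidences[i]: softmax confidence of model i's clean prediction

    The threshold and the majority-vote tie-breaking are assumptions,
    not details given in the abstract.
    """
    kept = [
        c for c, a, p in zip(clean_preds, adv_preds, confidences)
        # keep a model's vote only if it is both confident and robust,
        # i.e., its prediction survives the adversarial perturbation
        if c == a and p >= conf_threshold
    ]
    if not kept:
        return None  # no reliable vote; leave the sample unlabeled
    # the most frequent surviving prediction becomes the pseudo-label
    return Counter(kept).most_common(1)[0][0]
```

For example, with three source models predicting classes [2, 2, 1] on the clean sample and [2, 2, 0] on its adversarial example, only the two robust, confident votes for class 2 survive, so the pseudo-label is 2.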
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Deep Learning and representational learning