Privacy Protected Multi-Domain Collaborative Learning

29 Sept 2021 (modified: 13 Feb 2023) · ICLR 2022 Conference Withdrawn Submission
Abstract: Unsupervised domain adaptation (UDA) aims to transfer knowledge from one or more well-labeled source domains to improve model performance on a different yet related target domain without any annotations. However, existing UDA algorithms bring no benefit to the source domains and neglect privacy protection during data sharing. With these considerations, we define Privacy Protected Multi-Domain Collaborative Learning (P$^{2}$MDCL) and propose a novel Mask-Driven Federated Network (MDFNet) to reach a ``win-win'' deal for multiple domains while keeping their data protected. First, each domain is equipped with an individual local model that learns domain-invariant semantics via a mask-disentanglement mechanism. Second, a centralized server refines the global invariant model by integrating and exchanging local knowledge across all domains. Moreover, adaptive self-supervised optimization is deployed to learn discriminative features for unlabeled domains. Finally, theoretical studies and experimental results illustrate the rationality and effectiveness of our method in solving P$^{2}$MDCL.
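The collaboration pattern sketched in the abstract — local models trained per domain on masked (domain-invariant) features, with only model parameters averaged on a central server so raw data never leaves a domain — can be illustrated with a minimal toy loop. This is a hedged sketch, not the authors' MDFNet: the names `apply_mask`, `local_update`, and `fedavg`, the fixed mask, and the toy objective are all illustrative assumptions.

```python
import numpy as np

def apply_mask(features, mask):
    """Keep only the (assumed) domain-invariant feature dimensions."""
    return features * mask

def local_update(weights, features, mask, lr=0.1):
    """One toy gradient step on masked features (stand-in for a local model)."""
    invariant = apply_mask(features, mask)
    # Toy objective: pull the weights toward the mean invariant feature.
    grad = weights - invariant.mean(axis=0)
    return weights - lr * grad

def fedavg(local_weights):
    """Server step: average the invariant models contributed by all domains."""
    return np.mean(local_weights, axis=0)

rng = np.random.default_rng(0)
dim = 8
# In the actual method the mask is learned; here it is fixed for illustration.
mask = (rng.random(dim) > 0.5).astype(float)
global_w = np.zeros(dim)

for _round in range(5):            # communication rounds
    local_models = []
    for _domain in range(3):       # three collaborating domains
        feats = rng.normal(size=(16, dim))   # private, never shared
        local_models.append(local_update(global_w.copy(), feats, mask))
    global_w = fedavg(local_models)  # only parameters cross domain boundaries

print(global_w.shape)
```

The privacy argument rests on the fact that only `local_models` (parameters), never `feats` (raw data), are sent to the server; the mask additionally restricts what the shared model can encode to the invariant subspace.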
Supplementary Material: zip