Customizing Global Model for Diverse Target Distributions in Federated Learning

23 Sept 2023 (modified: 25 Mar 2024) · ICLR 2024 Conference Withdrawn Submission
Keywords: federated learning, self-supervised learning
Abstract: Federated learning (FL) is a privacy-preserving approach to training a global model on decentralized data. Most existing FL algorithms optimize the global model by minimizing the average loss across clients, aiming to perform well on a commonly assumed uniform target data distribution. In practice, however, a tailored model is often needed to excel on a specific unlabeled target dataset with an arbitrary distribution. This mismatch between the assumed and actual target distributions violates the uniformity assumption and thus undermines the effectiveness of vanilla FL methods. To fill this gap, we propose FedSSA, a self-supervised aggregation method that trains a global model tailored to specific target data. FedSSA leverages the target dataset on the server side to dynamically learn aggregation weights for local models in a self-supervised manner. These aggregation weights are iteratively adjusted to promote transformation invariance. With extensive qualitative and quantitative experiments, we demonstrate that FedSSA consistently outperforms 12 classical baselines across multiple datasets, heterogeneity scenarios, and target distributions. Furthermore, we showcase the plug-and-play property of FedSSA by combining it with various FL methods.
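The abstract does not specify the exact self-supervised objective, so the following is a minimal sketch of the server-side idea under stated assumptions: PyTorch with `torch.func.functional_call`, a list of client parameter dicts (e.g., from each client's `named_parameters()`), an unlabeled `target_loader` yielding input batches, and a hypothetical stochastic `augment` function. The consistency loss between two augmented views is one plausible instantiation of "transformation invariance"; the paper's actual loss and weight parameterization may differ.

```python
import torch
import torch.nn.functional as F
from torch.func import functional_call

def learn_aggregation_weights(model, local_params, target_loader, augment,
                              steps=100, lr=0.1, device="cpu"):
    """Learn softmax-normalized aggregation weights over client models by
    minimizing a transformation-consistency loss on unlabeled target data.
    `local_params`: list of {param_name: tensor} dicts, one per client."""
    n_clients = len(local_params)
    logits = torch.zeros(n_clients, device=device, requires_grad=True)
    optimizer = torch.optim.Adam([logits], lr=lr)

    data_iter = iter(target_loader)
    for _ in range(steps):
        try:
            x = next(data_iter)
        except StopIteration:  # restart the loader if it runs out
            data_iter = iter(target_loader)
            x = next(data_iter)
        x = x.to(device)

        w = torch.softmax(logits, dim=0)  # convex combination over clients
        # Weighted average of client parameters, differentiable w.r.t. w.
        agg = {name: sum(w[i] * local_params[i][name] for i in range(n_clients))
               for name in local_params[0]}

        # Self-supervised consistency: predictions on two random
        # augmentations of the same batch should agree.
        p1 = functional_call(model, agg, (augment(x),)).softmax(dim=-1)
        p2 = functional_call(model, agg, (augment(x),)).softmax(dim=-1)
        loss = F.mse_loss(p1, p2)

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    return torch.softmax(logits.detach(), dim=0)
```

With the learned weights in hand, the server would form the global model as the w-weighted average of client parameters, analogous to FedAvg but with weights learned from the target data rather than fixed in proportion to client dataset sizes.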
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 6963