One-shot Federated Learning with Training-Free Client

15 Sept 2023 (modified: 25 Mar 2024) · ICLR 2024 Conference Withdrawn Submission
Keywords: one-shot federated learning, statistical heterogeneity.
Abstract: While traditional iterative federated learning (FL) is limited by massive communication overhead, a larger attack surface, and stringent fault-tolerance requirements, an emerging and promising alternative is to conduct FL in a single communication round, termed one-shot FL. However, the lack of continuous communication causes severe performance degradation in current FL frameworks, especially when training under statistical heterogeneity, i.e., non-IID data. The primary objective of this paper is to develop an effective and efficient one-shot FL framework that better handles statistical heterogeneity. To this end, we first revisit the influence of statistical heterogeneity on model optimization and observe that the conventional mechanism (i.e., training from scratch and parameter averaging) is ill-suited to one-shot FL due to the problem of client drift. Based on this observation, we propose a novel one-shot FL framework, FedTC. Unlike existing methods, FedTC splits the model into a backbone and a head, deployed on the client and server sides, respectively. Specifically, our approach does not train the whole model from scratch on biased local datasets; instead, it learns only a detached head from unbiased class prototypes estimated by the pre-trained backbone. Moreover, we integrate a feature outlier filtering strategy and an adapter into FedTC to further improve its performance. Extensive experiments demonstrate that FedTC significantly outperforms several state-of-the-art one-shot FL approaches with extremely low communication and computation costs. The source code will be released.
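The pipeline the abstract describes can be illustrated with a short sketch. The PyTorch code below is a minimal, hypothetical rendering of that description, not the authors' implementation: each client uses a frozen pre-trained backbone to extract features and sends per-class prototypes to the server in a single round, and the server trains a detached linear head on the pooled prototypes. The function names (`client_round`, `server_train_head`), the distance-based outlier filter, the count-weighted loss, and the linear head are all assumptions for illustration; the adapter component is omitted for brevity.

```python
# Hypothetical one-shot prototype pipeline sketched from the abstract.
# The outlier rule, weighting scheme, and head architecture are assumptions,
# not the paper's exact algorithm.
import torch
import torch.nn as nn

NUM_CLASSES, FEAT_DIM = 10, 64

def client_round(backbone: nn.Module, xs: torch.Tensor, ys: torch.Tensor,
                 z_thresh: float = 2.0):
    """One-shot client step: extract features with a frozen pre-trained
    backbone (no local training), filter feature outliers, and return
    per-class mean prototypes plus the counts that survived filtering."""
    backbone.eval()
    with torch.no_grad():
        feats = backbone(xs)  # (N, FEAT_DIM)
    protos, counts = {}, {}
    for c in ys.unique().tolist():
        f = feats[ys == c]
        # Illustrative outlier filter: drop features far from the class mean.
        d = (f - f.mean(0)).norm(dim=1)
        keep = (d <= d.mean() + z_thresh * d.std()) if len(f) > 1 \
            else torch.ones(len(f), dtype=torch.bool)
        protos[c], counts[c] = f[keep].mean(0), int(keep.sum())
    return protos, counts

def server_train_head(client_msgs, epochs: int = 200, lr: float = 0.1):
    """Server step: pool the count-weighted prototypes from all clients and
    fit a detached linear head -- the only trained component."""
    X = torch.stack([p for protos, _ in client_msgs for p in protos.values()])
    y = torch.tensor([c for protos, _ in client_msgs for c in protos])
    w = torch.tensor([n for _, counts in client_msgs for n in counts.values()],
                     dtype=torch.float)
    head = nn.Linear(FEAT_DIM, NUM_CLASSES)
    opt = torch.optim.SGD(head.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        per_proto = nn.functional.cross_entropy(head(X), y, reduction="none")
        loss = (per_proto * w / w.sum()).sum()  # weight prototypes by support
        loss.backward()
        opt.step()
    return head

if __name__ == "__main__":
    backbone = nn.Linear(32, FEAT_DIM)  # stand-in for a pre-trained encoder
    msgs = []
    for _ in range(5):  # five clients, one communication round each
        xs = torch.randn(100, 32)
        ys = torch.randint(0, NUM_CLASSES, (100,))
        msgs.append(client_round(backbone, xs, ys))
    head = server_train_head(msgs)
    print("trained head:", head)
```

Note the communication pattern this sketch implies: each client uploads at most one `FEAT_DIM`-dimensional vector per class rather than full model parameters, which is consistent with the abstract's claim of extremely low communication cost.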
Primary Area: general machine learning (i.e., none of the above)
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 328