GPT-FL: Generative Pre-Trained Model-Assisted Federated Learning

TMLR Paper 2064 Authors

17 Jan 2024 (modified: 18 Apr 2024) · Rejected by TMLR
Abstract: In this work, we propose GPT-FL, a generative pre-trained model-assisted federated learning (FL) framework. At its core, GPT-FL leverages generative pre-trained models to generate diversified synthetic data. The generated data are used to train a downstream model on the server, which is then fine-tuned with private client data under the standard FL framework. We show that GPT-FL consistently outperforms state-of-the-art FL methods in terms of model test accuracy, communication efficiency, and client sampling efficiency. Through comprehensive ablation analysis, we find that the downstream model trained on synthetic data plays a crucial role in controlling the direction of gradient diversity during FL training, which enhances convergence speed and contributes to the notable accuracy boost observed with GPT-FL. Moreover, regardless of whether the target data falls within or outside the domain of the pre-trained generative model, GPT-FL consistently achieves significant performance gains, surpassing the results obtained by models trained solely with FL or solely with synthetic data.
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Ahmad_Beirami1
Submission Number: 2064