Keywords: Federated Learning, Diffusion Models
Abstract: One-Shot Federated Learning (OSFL) aims to build a global model with a single round of server–client interaction, making it attractive for practical scenarios. The recent introduction of Diffusion Models has enabled OSFL to synthesize client-like data on the server. However, these methods typically require fine-tuning a foundation model or a shared feature extractor on clients, which undermines practicality under heterogeneous scenarios. To address this limitation, we propose $\textbf{FedLD}$, a one-shot $\textbf{Fed}$erated learning method with $\textbf{L}$ocal $\textbf{D}$istribution-conditioned image synthesis. We fit a Gaussian Mixture Model (GMM) to the local distribution of each client and upload only the GMM parameters to the server. The server samples initial noise from these client-specific models to guide a Diffusion Model to generate data aligned with the client distributions, enabling OSFL without any client-side model training and significantly reducing both computation and communication costs. Quantitative and qualitative experiments on three large-scale real-world image datasets demonstrate that the initial noise sampled from the GMMs effectively transfers knowledge of the client distributions, further validating the potential of Diffusion Models in OSFL.
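For concreteness, below is a minimal sketch of the client/server roles described in the abstract, assuming scikit-learn's GaussianMixture with diagonal covariances fitted on flattened images. The feature space, component count, and the `diffusion_model.sample` interface are illustrative placeholders, not the paper's actual implementation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# --- Client side (no model training): fit a GMM to the local data ---
def fit_client_gmm(local_images: np.ndarray, n_components: int = 5) -> dict:
    """Fit a GMM to flattened local images and return its parameters."""
    flat = local_images.reshape(len(local_images), -1)
    gmm = GaussianMixture(n_components=n_components, covariance_type="diag")
    gmm.fit(flat)
    # Only these lightweight parameters are uploaded to the server.
    return {
        "weights": gmm.weights_,
        "means": gmm.means_,
        "covariances": gmm.covariances_,
    }

# --- Server side: sample client-conditioned initial noise for the Diffusion Model ---
def sample_initial_noise(params: dict, n_samples: int, image_shape: tuple) -> np.ndarray:
    """Draw initial noise from a client's GMM instead of a standard Gaussian."""
    gmm = GaussianMixture(n_components=len(params["weights"]), covariance_type="diag")
    gmm.weights_ = params["weights"]
    gmm.means_ = params["means"]
    gmm.covariances_ = params["covariances"]
    # precisions_cholesky_ must be set before sampling from a manually built GMM.
    gmm.precisions_cholesky_ = 1.0 / np.sqrt(params["covariances"])
    noise, _ = gmm.sample(n_samples)
    return noise.reshape(n_samples, *image_shape)

# The sampled noise would then seed the reverse diffusion process, e.g. (hypothetical API):
#   x_T = sample_initial_noise(client_params, n, (3, 64, 64))
#   synthetic_data = diffusion_model.sample(initial_noise=x_T)
```

This sketch only illustrates the communication pattern: each client uploads a handful of GMM parameters rather than model weights, and the server draws the diffusion starting noise from those parameters to steer generation toward that client's distribution.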
Primary Area: transfer learning, meta learning, and lifelong learning
Submission Number: 23854