Navigating Heterogeneity and Privacy in One-Shot Federated Learning with Diffusion Models

Published: 01 Jan 2025 · Last Modified: 20 May 2025 · WACV 2025 · CC BY-SA 4.0
Abstract: Federated learning (FL) enables multiple clients to train models collectively while preserving data privacy. However, FL faces challenges in terms of communication cost and data heterogeneity. One-shot federated learning has emerged as a solution by reducing communication rounds, improving efficiency, and providing better security against eavesdropping attacks. Nevertheless, data heterogeneity remains a significant challenge, impacting performance. This work explores the effectiveness of diffusion models in one-shot FL, demonstrating their applicability in addressing data heterogeneity and improving FL performance. Additionally, we investigate the utility of our diffusion model approach, FedDiff, compared to other one-shot FL methods under differential privacy (DP). Furthermore, to improve generated sample quality under DP settings, we propose a pragmatic Fourier Magnitude Filtering (FMF) method, enhancing the effectiveness of the generated data for global model training. Code available at https://github.com/mmendiet/FedDiff.
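To illustrate the general idea of filtering generated samples by their Fourier magnitude, here is a minimal sketch. This is not the paper's implementation: the scoring rule (L2 distance of a sample's magnitude spectrum to a reference spectrum), the `keep_ratio` parameter, and both function names are assumptions made for illustration only.

```python
import numpy as np

def fourier_magnitude(image: np.ndarray) -> np.ndarray:
    """Return the (shifted) 2-D Fourier magnitude spectrum of an image."""
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    return np.abs(spectrum)

def filter_by_fourier_magnitude(samples, reference_mag, keep_ratio=0.5):
    """Hypothetical filter: keep the keep_ratio fraction of samples whose
    magnitude spectra are closest (in L2 distance) to a reference spectrum,
    discarding samples with atypical frequency content."""
    distances = [
        np.linalg.norm(fourier_magnitude(s) - reference_mag) for s in samples
    ]
    order = np.argsort(distances)
    n_keep = max(1, int(len(samples) * keep_ratio))
    return [samples[i] for i in order[:n_keep]]
```

In a one-shot FL pipeline, a filter of this kind would be applied to diffusion-generated samples before they are used to train the global model, under the intuition that DP noise distorts frequency statistics in ways a magnitude-based score can detect.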