FedPMVR: Addressing Data Heterogeneity in Federated Learning through Partial Momentum Variance Reduction

27 Sept 2024 (modified: 15 Nov 2024) · ICLR 2025 Conference Withdrawn Submission · CC BY 4.0
Keywords: Federated Learning, Data Heterogeneity, Variance Reduction, Image Classification.
TL;DR: FedPMVR: a momentum-based partial variance reduction method for federated learning under non-IID data.
Abstract: Federated learning (FL) has emerged as a promising paradigm for training machine learning models on decentralized data sources while preserving privacy. However, the presence of non-independent and identically distributed (non-IID) data among clients introduces high variance in gradient updates, posing a significant challenge to the global model's accuracy and convergence. To mitigate the adverse effects of data heterogeneity, we propose a novel momentum-based partial variance reduction technique. Our approach adjusts the gradient updates for the final classification layers of each client's neural network by leveraging the gradient differences between the local and global models. This adjustment effectively captures and mitigates client drift, a key challenge arising from non-IID data distributions across clients. We systematically explain client drift and conduct extensive experiments on three widely used datasets, demonstrating that our method significantly improves global model accuracy while reducing the number of communication rounds needed for convergence. Notably, the momentum-based partial variance reduction technique provides a robust mechanism, making training more efficient and effective in scenarios with inherently non-IID and heterogeneous data distributions. By addressing the critical challenge of data heterogeneity in FL, our approach paves the way for more reliable and accurate model training while preserving the privacy of decentralized data sources. The code is available at https://anonymous.4open.science/r/FedPMVR-33C1.
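To make the abstract's core idea concrete, below is a minimal sketch of one plausible form of the partial momentum variance-reduction client step, based only on the description above (the released code at the anonymous link is authoritative). The function name `local_step`, the momentum coefficient `beta`, the `classifier_prefix` used to select the final layers, and the exact drift-correction rule are all assumptions for illustration, not the paper's actual implementation.

```python
# Hypothetical sketch of a momentum-based partial variance-reduction update.
# Names (local_step, beta, classifier_prefix) and the correction rule are
# assumptions inferred from the abstract, not the authors' released code.
import torch

def local_step(model, global_grads, momentum, beta=0.9, lr=0.01,
               classifier_prefix="fc"):
    """One corrected SGD step on a client.

    global_grads: dict mapping parameter names to the global model's
        gradients on the same batch (or a server-side estimate).
    momentum: persistent dict of per-parameter momentum buffers, kept
        across local steps.
    """
    for name, p in model.named_parameters():
        if p.grad is None:
            continue
        g = p.grad
        # Apply the variance-reduction correction only to the final
        # classification layer(s): the "partial" part of FedPMVR.
        if name.startswith(classifier_prefix) and name in global_grads:
            drift = g - global_grads[name]          # local-vs-global gradient gap
            buf = momentum.get(name, torch.zeros_like(g))
            buf = beta * buf + (1 - beta) * drift   # momentum over the drift
            momentum[name] = buf
            g = g - buf                             # subtract estimated client drift
        p.data.add_(g, alpha=-lr)                   # plain SGD step otherwise
```

Under these assumptions, the rest of a FedAvg-style loop would be unchanged: clients run `local_step` for several epochs and the server averages the resulting models; only the classifier-layer momentum bookkeeping is added on top of standard local training.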
Primary Area: applications to computer vision, audio, language, and other modalities
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 10438