FedDecay: Adapting to Data Heterogeneity in Federated Learning With Gradient Decay

19 Sept 2023 (modified: 25 Mar 2024) · ICLR 2024 Conference Withdrawn Submission
Keywords: federated learning, meta learning
Abstract: Federated learning is a powerful technique for collaboratively training a centralized model on distributed local data sources, preserving data privacy by aggregating model information without disclosing the local training data. However, the inherent diversity of local datasets challenges the performance of traditional single-model-based techniques, especially when data is not identically distributed across sources. Personalized models can mitigate this challenge but often incur additional memory and computation costs. In this work, we introduce FedDecay, a novel approach that enhances single-model-based federated learning by incorporating gradient decay into the local updates within each training round. FedDecay adapts the gradient during training through a tunable hyper-parameter, striking a balance between initial model success and fine-tuning potential. We provide both theoretical insights and empirical evidence of FedDecay's efficacy across diverse domains, including vision, text, and graph data. Our extensive experiments demonstrate that FedDecay outperforms other single-model-based methods in generalization performance for both new and existing users. This work highlights the potential of tailored gradient adjustments to bridge the gap between personalized and single-model federated learning techniques, advancing the efficiency and effectiveness of decentralized learning.
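The abstract describes FedDecay only at a high level: gradient decay applied to local updates within each round, controlled by one tunable hyper-parameter. Below is a minimal PyTorch sketch of one plausible reading, in which the k-th local step's effective step size is scaled by beta^k before an otherwise standard FedAvg-style aggregation. The function names, uniform averaging, and the exact decay schedule are illustrative assumptions, not the authors' implementation.

```python
import copy
import torch

def local_update_with_decay(model, data_loader, loss_fn, base_lr=0.01, beta=0.9):
    """One client's local round with per-step gradient decay (sketch).

    Assumption: the k-th local step uses an effective step size of
    base_lr * beta**k, so early steps move the model more than later ones.
    beta is the tunable decay hyper-parameter; beta = 1 recovers plain
    local SGD as in FedAvg.
    """
    model = copy.deepcopy(model)  # keep the server model untouched
    for k, (x, y) in enumerate(data_loader):
        loss = loss_fn(model(x), y)
        model.zero_grad()
        loss.backward()
        with torch.no_grad():
            for p in model.parameters():
                if p.grad is not None:
                    p -= (base_lr * beta ** k) * p.grad
    return model.state_dict()

def federated_round(server_model, client_loaders, loss_fn, base_lr=0.01, beta=0.9):
    """FedAvg-style round: each client runs the decayed local update,
    then the server averages the returned weights uniformly."""
    client_states = [
        local_update_with_decay(server_model, loader, loss_fn, base_lr, beta)
        for loader in client_loaders
    ]
    avg_state = {
        name: torch.stack([s[name].float() for s in client_states]).mean(dim=0)
        for name in client_states[0]
    }
    server_model.load_state_dict(avg_state)
    return server_model
```

In this reading, beta trades off how strongly the shared initialization is pulled toward each client's data (initial model success) against how much adaptation is left for later fine-tuning; the paper's actual schedule and aggregation weights may differ.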
Supplementary Material: zip
Primary Area: general machine learning (i.e., none of the above)
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 1949