Balancing Model Performance and Rapid Personalization in Federated Learning with Learning Rate Scheduling

TMLR Paper3465 Authors

09 Oct 2024 (modified: 28 Nov 2024) · Under review for TMLR · CC BY 4.0
Abstract: Federated learning (FL) is a powerful technique for collaboratively training a single centralized model on distributed local data sources. By aggregating model information without disclosing the local training data, FL preserves data privacy. However, the inherent heterogeneity of local datasets challenges the performance of FL techniques, especially when data is highly diverse across local sources. Personalized Federated Learning (PFL) can mitigate this challenge by maintaining multiple models, but it often requires additional memory and computation. This work does not propose a new PFL method; instead, it shows how learning rate decay within each training round balances model performance across all local datasets against performance on local data after fine-tuning. We provide theoretical insights and empirical evidence of efficacy across diverse domains, including vision, text, and graph data. Our extensive experiments demonstrate that learning rate scheduling alone outperforms other FL methods in generalizing to new data from both new and existing users. Moreover, it performs comparably to PFL methods, particularly for new users, while maintaining computation and memory requirements similar to those of standard FL techniques.
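To make the core idea concrete, the sketch below shows a FedAvg-style communication round in which each client's local learning rate decays across its local SGD steps within the round. The exponential schedule, toy linear model, and synthetic client data are illustrative assumptions and not the paper's exact setup.

```python
import numpy as np

# Minimal FedAvg-style sketch: each client runs local SGD with a learning
# rate that decays *within* the round. The exponential schedule, toy linear
# model, and synthetic heterogeneous client data are assumptions for
# illustration only.

rng = np.random.default_rng(0)

def make_client(dim=5, n=50):
    """Synthetic heterogeneous client: its own true weights and data."""
    w_true = rng.normal(size=dim)
    X = rng.normal(size=(n, dim))
    y = X @ w_true + 0.1 * rng.normal(size=n)
    return X, y

def local_update(w_global, X, y, local_steps=10, lr0=0.1, decay=0.7):
    """Local SGD; the learning rate decays across steps within the round."""
    w = w_global.copy()
    for t in range(local_steps):
        lr = lr0 * (decay ** t)              # within-round learning rate decay
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

clients = [make_client() for _ in range(4)]
w = np.zeros(5)

for rnd in range(20):                        # communication rounds
    local_models = [local_update(w, X, y) for X, y in clients]
    w = np.mean(local_models, axis=0)        # server-side FedAvg aggregation

mse = np.mean([np.mean((X @ w - y) ** 2) for X, y in clients])
print(f"average client MSE after training: {mse:.4f}")
```

Intuitively, a weak decay lets later local steps drift further toward each client's own data, while a strong decay keeps local updates close to the shared model; per the abstract, this single knob is what trades off global performance against performance after local fine-tuning.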
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Tian_Li1
Submission Number: 3465