FedRKMGC: Towards High-Performance Gradient Correction-based Federated Learning via Relaxation and Fast KM Iteration
Keywords: Federated Learning; Gradient Correction; KM iteration
Abstract: Federated learning (FL) enables multiple clients to collaboratively train machine learning models without sharing their local data, offering clear advantages in privacy and scalability. However, existing FL algorithms often exhibit slow convergence, particularly under heterogeneous data distributions, resulting in high communication costs. To mitigate this, we propose FedRKMGC, a novel federated learning framework that integrates gradient correction with a classical relaxation strategy and the fast Krasnosel'skiĭ–Mann (KM) iteration to accelerate convergence. Specifically, the fast KM technique is applied during local training to speed up client updates, while a relaxation step is introduced during server aggregation to further accelerate global iterations. By combining these complementary mechanisms, FedRKMGC effectively mitigates client drift and accelerates convergence, improving both training stability and communication efficiency. Extensive experiments on standard FL benchmarks demonstrate that FedRKMGC consistently achieves superior convergence performance and substantial communication savings compared with existing state-of-the-art FL methods.
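For reference, the classical (relaxed) KM iteration for a nonexpansive operator $T$ has the standard form shown below; the server-side relaxation step described in the abstract presumably follows this template, with the aggregated client update playing the role of $T$. The relaxation schedule $\lambda_k$ and the fast-KM acceleration variant are placeholders here, not the paper's specific choices.
\[
  x_{k+1} = (1 - \lambda_k)\, x_k + \lambda_k\, T(x_k), \qquad \lambda_k \in (0, 1].
\]
In the FL setting this would read as a global update $w^{t+1} = (1 - \lambda_t)\, w^t + \lambda_t \,\mathrm{Agg}\bigl(\{w_i^{t+1}\}\bigr)$, where $\mathrm{Agg}$ denotes a hypothetical server aggregation of the locally corrected client models.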
Supplementary Material: zip
Primary Area: optimization
Submission Number: 4609