Dynamic and Fast Convergence for Federated Learning via Optimized Hyperparameters

Published: 01 Jan 2025, Last Modified: 04 Nov 2025. IEEE Trans. Netw. Serv. Manag. 2025. License: CC BY-SA 4.0
Abstract: Federated Learning (FL) is a privacy-preserving computing paradigm that enables participants to collaboratively train a global model without exchanging their raw personal data. Due to frequent communication and data heterogeneity across devices with unique local data distributions, FL suffers from slow convergence. To speed up convergence, existing methods adjust hyperparameters in FL to reduce the volume of model updates, the number of participating devices, and the number of local iterations. However, most of them tune only a subset of these hyperparameters and rely primarily on analytical optimization; a more integrated and dynamic coordination of all hyperparameters is needed. To address this issue, we first propose an efficient FL framework enabled by rand-m sparsification and stochastic quantization. For this framework, we conduct a rigorous theoretical analysis to explore the trade-offs among quantization level, sparsification level, device participation, and local iteration count. To further improve convergence speed, we design a Deep Reinforcement Learning (DRL)-based strategy that dynamically coordinates these hyperparameters. Experimental results show that our method improves convergence speed by at least 8% compared to existing approaches.
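To make the two compression primitives named in the abstract concrete, the following is a minimal sketch of rand-m sparsification (keep m randomly chosen coordinates, rescaled for unbiasedness) combined with QSGD-style stochastic quantization of a local update before upload. The function names, the order of composition, and the use of a global-norm quantization grid are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def rand_m_sparsify(update, m, rng):
    """Keep m randomly chosen coordinates of a model update (rand-m sparsification).

    Scaling the surviving coordinates by d/m keeps the sparsified update an
    unbiased estimate of the original update.
    """
    d = update.size
    keep_idx = rng.choice(d, size=m, replace=False)
    sparse = np.zeros_like(update)
    sparse[keep_idx] = update[keep_idx] * (d / m)
    return sparse

def stochastic_quantize(update, num_levels, rng):
    """QSGD-style stochastic quantization onto num_levels uniform levels.

    Each coordinate's normalized magnitude is randomly rounded up or down with
    probabilities chosen so the quantized value is unbiased.
    """
    norm = np.linalg.norm(update)
    if norm == 0.0:
        return update
    scaled = np.abs(update) / norm * num_levels   # position on the quantization grid
    lower = np.floor(scaled)
    prob_up = scaled - lower                      # probability of rounding up
    levels = lower + (rng.random(update.shape) < prob_up)
    return np.sign(update) * levels / num_levels * norm

# Example: a device compresses its local update before sending it to the server.
rng = np.random.default_rng(0)
local_update = rng.normal(size=1000)
compressed = stochastic_quantize(
    rand_m_sparsify(local_update, m=100, rng=rng), num_levels=16, rng=rng
)
```

In the paper's setting, the sparsification level m and the quantization level (here `num_levels`) are two of the hyperparameters that the DRL-based strategy coordinates, alongside device participation and the number of local iterations.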