Toward Efficient and Scalable Asynchronous Federated Learning via Stragglers Version Control

Chuyi Chen, Yanchao Zhao, Zhe Zhang, Wenzhong Li, Jie Wu

Published: 01 Feb 2026, Last Modified: 12 Jan 2026. IEEE Transactions on Mobile Computing. License: CC BY-SA 4.0
Abstract: Asynchronous Federated Learning (AFL) has emerged as a promising paradigm for addressing the challenges posed by heterogeneous device environments in federated learning systems. However, low accuracy and slow convergence caused by inconsistent updates from stragglers and normal clients remain severe problems in AFL. Previous works either discard or penalize the updates from stragglers, which can lead to the loss of valuable data or introduce bias into the model. Furthermore, existing AFL frameworks that integrate synchronous optimization algorithms suffer from weak compatibility and limited scalability, which restricts large-scale training. In this paper, we propose DVAFL, an efficient and scalable AFL framework that significantly improves model accuracy and convergence speed by effectively utilizing and compensating for the updates from stragglers, while naturally integrating synchronous optimization algorithms. Specifically, DVAFL introduces a dynamic window protocol for adaptive aggregation to balance the contribution of stragglers and ensure faster and more stable convergence. Further, a version control mechanism corrects stale gradients by compensating for the model updates that stragglers have missed, thereby improving model performance. Extensive experiments on three public datasets demonstrate that DVAFL converges on average 2.3× faster and improves accuracy by 5.5% compared to state-of-the-art AFL methods.
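To make the abstract's two mechanisms concrete, the following is a minimal, hypothetical Python sketch of a server that (a) aggregates over a dynamic window that shrinks as buffered updates grow stale and (b) down-weights stale updates by the version gap. All names (DynamicWindowServer, base_window, decay) and the specific weighting rule are illustrative assumptions, not the actual DVAFL protocol or API.

```python
# Hypothetical sketch of staleness-aware asynchronous aggregation.
# The class name, parameters, and weighting rule are assumptions for illustration,
# not the DVAFL implementation described in the paper.
import numpy as np

class DynamicWindowServer:
    def __init__(self, dim, base_window=4, decay=0.5):
        self.model = np.zeros(dim)      # global model parameters
        self.version = 0                # global model version counter
        self.buffer = []                # pending (update, client_version) pairs
        self.base_window = base_window  # nominal number of updates per aggregation
        self.decay = decay              # staleness penalty strength (assumed form)

    def receive(self, update, client_version):
        """Buffer an update tagged with the model version the client trained on."""
        self.buffer.append((update, client_version))
        # Dynamic window (assumed rule): shrink the window when buffered updates
        # are very stale, so straggler contributions are merged sooner.
        staleness = [self.version - v for _, v in self.buffer]
        window = max(1, self.base_window - max(staleness))
        if len(self.buffer) >= window:
            self._aggregate()

    def _aggregate(self):
        """Merge buffered updates, down-weighting stale ones (one possible rule)."""
        weights, updates = [], []
        for update, client_version in self.buffer:
            staleness = self.version - client_version
            weights.append(1.0 / (1.0 + self.decay * staleness))
            updates.append(update)
        weights = np.array(weights) / np.sum(weights)
        self.model += sum(w * u for w, u in zip(weights, updates))
        self.version += 1
        self.buffer.clear()

if __name__ == "__main__":
    server = DynamicWindowServer(dim=3)
    server.receive(np.array([0.1, 0.0, 0.2]), client_version=0)
    server.receive(np.array([0.3, 0.1, 0.0]), client_version=0)
    server.receive(np.array([0.2, 0.2, 0.2]), client_version=0)
    server.receive(np.array([0.0, 0.1, 0.1]), client_version=0)  # fills the window
    print(server.model, server.version)
```

The staleness-decayed weighting shown here is a stand-in for DVAFL's version control mechanism, which the abstract describes as compensating for the global updates a straggler missed rather than merely down-weighting its gradient.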