Asynchronous Federated Learning Through Online Linear Regressions

Published: 01 Jan 2024, Last Modified: 16 Feb 2025 · IEEE Access 2024 · CC BY-SA 4.0
Abstract: In practical Federated Learning (FL) scenarios, clients upload their local models to a server at different times owing to heterogeneity in the clients' device environments. Asynchronous Federated Learning (AFL) has therefore been actively studied in recent years. Although the initial motivation for AFL is to alleviate these difficulties of FL, AFL itself also has problems in practice; that is, the most recently uploaded local model affects the performance of the global model. As a scientific challenge, we aim to develop a simple yet broadly applicable concept for AFL. To this end, we revisited classic machine learning theory and found a strong conceptual affinity between AFL and online linear regression: both are problem formulations over sequential inputs. Building on this underlying philosophy, we propose a framework for AFL in which classic online linear regression techniques are utilized in server aggregation to realize asynchronous global model updates. Our framework provides a convergence guarantee on the global model performance relative to a synchronized aggregation method through the theoretical guarantees of online linear regression. Experiments on both image and text tasks under non-IID data demonstrate the superiority of our asynchronous server-side updates.
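To make the core idea concrete, the following is a minimal, illustrative sketch of online-regression-style asynchronous aggregation: the server treats each arriving local model as one step of a sequential update, rather than waiting to average a synchronized batch of clients. This is not the paper's algorithm; the class name `OnlineAggregator`, the decaying step size `eta0 / t`, and the squared-loss update rule are assumptions introduced purely for illustration.

```python
import numpy as np


class OnlineAggregator:
    """Illustrative asynchronous server aggregation (hypothetical sketch).

    Each arriving client model is folded into the global model as one step
    of an online update on a squared loss, in the spirit of online linear
    regression / online gradient descent over sequential inputs.
    """

    def __init__(self, dim, eta0=1.0):
        self.global_model = np.zeros(dim)  # current global parameters
        self.t = 0                         # number of client updates received
        self.eta0 = eta0                   # base step size (illustrative choice)

    def receive(self, local_model):
        """Process one asynchronously arriving local model."""
        self.t += 1
        eta = self.eta0 / self.t  # decaying step size, as in online learning
        # Gradient of 0.5 * ||global - local||^2 w.r.t. global is (global - local),
        # so this step moves the global model toward the incoming local model.
        self.global_model -= eta * (self.global_model - local_model)
        return self.global_model


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    server = OnlineAggregator(dim=4)
    # Clients upload at different times; the server processes updates one by one
    # instead of waiting for a synchronized aggregation round.
    for _ in range(10):
        client_update = rng.normal(size=4)
        server.receive(client_update)
    print(server.global_model)
```

The sequential, per-arrival update is what gives the approach its asynchronous character: no client has to wait for others, and the regret/convergence analysis of online linear regression is what the paper leverages to relate this style of update to a synchronized aggregation baseline.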