FedAST: Federated Asynchronous Simultaneous Training

Published: 26 Apr 2024 · Last Modified: 15 Jul 2024 · UAI 2024 poster · CC BY 4.0
Keywords: Asynchronous Federated Learning, Simultaneous Training, Federated Learning with Multiple Models, Non-convex Optimization
TL;DR: We propose FedAST, a buffered asynchronous approach for federated simultaneous training of multiple models, addressing the straggler and resource-allocation problems.
Abstract: Federated Learning (FL) enables edge devices or clients to collaboratively train machine learning (ML) models without sharing their private data. Much of the existing work in FL focuses on efficiently learning a model for a single task. In this paper, we study simultaneous training of multiple FL models using a common set of clients. The few existing simultaneous training methods employ synchronous aggregation of client updates, which can cause significant delays because large models and/or slow clients can bottleneck the aggregation. On the other hand, a naive asynchronous aggregation is adversely affected by stale client updates. We propose FedAST, a buffered asynchronous federated simultaneous training algorithm that overcomes bottlenecks from slow models and adaptively allocates client resources across heterogeneous tasks. We provide theoretical convergence guarantees for FedAST for smooth non-convex objective functions. Extensive experiments over multiple real-world datasets demonstrate that our proposed method outperforms existing simultaneous FL approaches, achieving up to 46.0% reduction in time to train multiple tasks to completion.
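To make the buffered asynchronous idea in the abstract concrete, the following is a minimal toy simulation sketch, not the authors' implementation: the constants (NUM_MODELS, BUFFER_SIZE), the quadratic local objective in local_update, and the random re-dispatch of clients across tasks are all illustrative assumptions. It shows the core mechanism of collecting asynchronously arriving (possibly stale) client updates into a per-model buffer and aggregating a model only once its buffer fills, so a slow model or straggling client never blocks the others.

```python
# Toy simulation of buffered asynchronous simultaneous training (illustrative only).
# Each client trains one task at a time; the server flushes a model's buffer once
# BUFFER_SIZE updates have arrived for that model.
import heapq
import numpy as np

rng = np.random.default_rng(0)

NUM_MODELS = 2           # number of FL tasks trained simultaneously (assumed)
NUM_CLIENTS = 20         # shared pool of clients (assumed)
BUFFER_SIZE = 4          # aggregate once this many updates arrive for a model
TOTAL_AGGREGATIONS = 50  # stop after this many buffer flushes across all models
DIM = 10                 # toy model dimension
LR = 0.1

# Global model parameters, one vector per task.
global_models = [rng.normal(size=DIM) for _ in range(NUM_MODELS)]
# Per-model buffers of pending client updates (delta vectors).
buffers = [[] for _ in range(NUM_MODELS)]
# Heterogeneous per-client compute delays (stragglers).
client_delay = rng.uniform(0.5, 5.0, size=NUM_CLIENTS)

def local_update(model, task):
    """One toy local step: noisy gradient of ||w - target||^2 for this task."""
    target = np.full(DIM, float(task))           # task-specific optimum (toy)
    grad = 2.0 * (model - target) + rng.normal(scale=0.1, size=DIM)
    return -LR * grad                            # client sends the model delta

# Event queue of (finish_time, client, task, delta): clients run asynchronously.
events = []
for c in range(NUM_CLIENTS):
    task = c % NUM_MODELS                        # simple initial allocation
    delta = local_update(global_models[task], task)
    heapq.heappush(events, (client_delay[c], c, task, delta))

aggregations = 0
while events and aggregations < TOTAL_AGGREGATIONS:
    t, c, task, delta = heapq.heappop(events)    # next client to finish
    buffers[task].append(delta)                  # possibly-stale update
    if len(buffers[task]) >= BUFFER_SIZE:        # flush this model's buffer
        global_models[task] += np.mean(buffers[task], axis=0)
        buffers[task].clear()
        aggregations += 1
    # Re-dispatch the client immediately; random task choice stands in for
    # the paper's adaptive client-to-task allocation.
    next_task = int(rng.integers(NUM_MODELS))
    new_delta = local_update(global_models[next_task], next_task)
    heapq.heappush(events, (t + client_delay[c], c, next_task, new_delta))

for m, w in enumerate(global_models):
    print(f"task {m}: distance to toy optimum = {np.linalg.norm(w - m):.3f}")
```

Because each model flushes its own buffer independently, a large or slow task only delays its own aggregations, while staleness is bounded by how long updates sit in the buffer; the actual algorithm, staleness handling, and resource allocation are specified in the paper.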
Supplementary Material: zip
List Of Authors: Askin, Baris and Sharma, Pranay and Joe-Wong, Carlee and Joshi, Gauri
Latex Source Code: zip
Signed License Agreement: pdf
Code Url: https://github.com/askinb/FedAST/tree/main
Submission Number: 636