FasDL: An Efficient Serverless-Based Training Architecture With Communication Optimization and Resource Configuration
Abstract: Deploying distributed training workloads of deep learning models atop serverless architecture relieves deep learning practitioners of the burden of managing servers. However, when supporting deep model training, current serverless architectures face the challenges of inefficient communication patterns and rigid resource configuration, which incur subpar and unpredictable training performance. In this paper, we propose FasDL, an efficient serverless-based deep learning training architecture that addresses these two challenges. FasDL adopts a novel training framework, K-REDUCE, to reduce communication overhead and accelerate training. Additionally, FasDL builds a lightweight mathematical model of K-REDUCE training, offering predictable performance and supporting subsequent resource configuration. It achieves the optimal resource configuration by formulating an optimization problem over system-level and application-level parameters and solving it with a pruning-based heuristic search algorithm. Extensive experiments on AWS Lambda verify a prediction accuracy of over 94% and demonstrate performance and cost advantages over the state-of-the-art architecture LambdaML of up to 16.8% and 28.3%, respectively.
External IDs: dblp:journals/tc/ChenCZMB25