Toward Competitive Serverless Deep Learning

Published: 03 Nov 2023, Last Modified: 21 Nov 2023. Venue: DICG 2023.
Keywords: Serverless, GPU acceleration, Machine Learning
Abstract: Machine learning is becoming a key technology for making systems smarter and more powerful. Unfortunately, training large and capable ML models is resource-intensive and requires considerable operational skill. Serverless computing is an emerging paradigm for structuring applications so that they benefit from on-demand computing resources and horizontal scalability while making those resources easier to consume. As such, it is an ideal substrate for the resource-intensive and often ad hoc task of training deep learning models, and it has strong potential to democratize access to ML techniques. However, the design of serverless platforms makes deep learning training difficult to translate efficiently to this new world. Apart from the intrinsic communication overhead (serverless functions are stateless), serverless training is limited by reduced access to GPUs, which is especially problematic for deep learning workloads, notorious for their computational demands. To address these limitations, we present KubeML, a purpose-built deep learning system for serverless computing. KubeML fully embraces GPU acceleration while reducing the inherent communication overhead of deep learning workloads to match the limited capabilities of the serverless paradigm. In our experiments, KubeML outperforms TensorFlow for smaller local batch sizes, reaching a 3.98x faster time-to-accuracy in these cases and maintaining a 2.02x speedup on commonly benchmarked machine learning models such as ResNet34.
Submission Number: 7