Colossal-AI: A Unified Deep Learning System For Large-Scale Parallel Training

Published: 01 Jan 2021, Last Modified: 16 May 2023, CoRR 2021
Abstract: The success of Transformer models has pushed the deep learning model scale to billions of parameters. However, because such models exceed the memory of a single GPU, they must be trained with distributed parallelism, and a best practice for choosing the optimal parallel strategy is still lacking, since it requires domain expertise in both deep learning and parallel computing. The Colossal-AI system addresses this challenge by introducing a unified interface that scales sequential model-training code to distributed environments. It supports parallel training methods such as data, pipeline, tensor, and sequence parallelism, as well as heterogeneous training methods integrated with the zero redundancy optimizer. Compared to the baseline system, Colossal-AI can achieve up to 2.76 times training speedup on large-scale models.
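As a rough illustration of the "unified interface" the abstract describes, the sketch below follows the launch/initialize pattern of early Colossal-AI releases, in which the parallel strategy is declared in a config file and ordinary sequential PyTorch training code is wrapped into a distributed engine. The config values, and the `build_model`/`build_dataloader` helpers, are illustrative assumptions, not details from the paper.

```python
# Hypothetical config.py -- the parallel layout below is an assumed example:
# parallel = dict(
#     pipeline=2,                      # 2 pipeline stages
#     tensor=dict(size=4, mode='2d'),  # 4-way 2D tensor parallelism
# )

import colossalai
import torch


def main():
    # The parallel strategy is read from the config file, so the model
    # code itself stays sequential.
    colossalai.launch_from_torch(config='config.py')

    model = build_model()            # assumed helper: any plain PyTorch model
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    criterion = torch.nn.CrossEntropyLoss()

    # colossalai.initialize wraps the sequential training objects into an
    # engine that runs them under the configured parallelism.
    engine, train_dataloader, _, _ = colossalai.initialize(
        model, optimizer, criterion, build_dataloader())  # assumed helper

    engine.train()
    for batch, labels in train_dataloader:
        engine.zero_grad()
        outputs = engine(batch)
        loss = engine.criterion(outputs, labels)
        engine.backward(loss)
        engine.step()


if __name__ == '__main__':
    main()
```

The training loop is deliberately identical in shape to a single-GPU loop; switching from, say, pipeline to tensor parallelism would only change the config file, not this code.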