HSDP: Accelerating Large-scale Model Training via Efficient Sharded Data Parallelism

Published: 2024 · Last Modified: 07 Jan 2026 · ISPA 2024 · CC BY-SA 4.0
Abstract: Large deep neural network (DNN) models have demonstrated exceptional performance across diverse downstream tasks. Sharded data parallelism (SDP) has been widely used to reduce the memory footprint of model states. In a DNN training cluster, a device is typically connected to other devices through multiple inter-device links, such as NVLink and InfiniBand. However, existing SDP approaches use only a single link at any given time and therefore suffer from significant communication overhead, which hinders efficient training. We observe that the inter-device links can operate independently without affecting each other. To reduce the substantial communication overhead of distributed training of large DNNs, this paper introduces HSDP, an efficient SDP training approach that enables the simultaneous utilization of multiple inter-device links. HSDP partitions models in a novel fine-grained manner and orchestrates the communication of the partitioned parameters with the inter-device links in mind. This design enables concurrent communication execution and reduces communication overhead. To further optimize the training performance of HSDP, we propose an HSDP planner. The planner first abstracts HSDP's model partitioning and execution into a communication-parallel strategy and builds a cost model to estimate the performance of each candidate strategy. We then formulate the strategy search as an optimization problem and solve it with an off-the-shelf solver. Evaluations on representative DNN workloads demonstrate that HSDP achieves up to a 1.30× speedup over state-of-the-art SDP training approaches.
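
The central idea of driving multiple inter-device links at the same time can be illustrated with a minimal PyTorch sketch (not the paper's implementation): two parameter shards are all-gathered on two separate process groups issued asynchronously, so both collectives are in flight concurrently. The group names, shard sizes, and the mapping of each group to a physical link (NVLink vs. InfiniBand) are assumptions for illustration only.

```python
# Minimal sketch: overlap all-gathers of two parameter shards by issuing them
# asynchronously on two separate process groups. In HSDP each group would be
# mapped to a different physical link (e.g. NVLink vs. InfiniBand); here the
# mapping is assumed and depends on the cluster's NCCL/network configuration.
import torch
import torch.distributed as dist


def allgather_on_two_links(shard_a, shard_b, group_a, group_b):
    """Issue both all-gathers with async_op=True so the two links can work in parallel."""
    world = dist.get_world_size()
    out_a = torch.empty(world * shard_a.numel(), dtype=shard_a.dtype, device=shard_a.device)
    out_b = torch.empty(world * shard_b.numel(), dtype=shard_b.dtype, device=shard_b.device)

    # Both collectives are launched before either is waited on.
    work_a = dist.all_gather_into_tensor(out_a, shard_a, group=group_a, async_op=True)
    work_b = dist.all_gather_into_tensor(out_b, shard_b, group=group_b, async_op=True)
    work_a.wait()
    work_b.wait()
    return out_a, out_b


if __name__ == "__main__":
    # Launch with: torchrun --nproc_per_node=<N> this_file.py
    dist.init_process_group("nccl")
    torch.cuda.set_device(dist.get_rank() % torch.cuda.device_count())
    ranks = list(range(dist.get_world_size()))

    # Two process groups over the same ranks; pinning each to a distinct fabric
    # is cluster-specific and omitted here.
    group_a = dist.new_group(ranks=ranks)
    group_b = dist.new_group(ranks=ranks)

    shard_a = torch.randn(1 << 20, device="cuda")
    shard_b = torch.randn(1 << 20, device="cuda")
    full_a, full_b = allgather_on_two_links(shard_a, shard_b, group_a, group_b)
    dist.destroy_process_group()
```

The planner's strategy search can likewise be illustrated as a toy optimization problem handed to an off-the-shelf solver. The sketch below assumes the PuLP library; the cost model (partition size divided by link bandwidth) and the minimize-the-slowest-link objective are simplifications for illustration, not the paper's actual cost model or formulation.

```python
# Toy planner: assign hypothetical parameter partitions to links so that the
# estimated communication makespan is minimized. Sizes and bandwidths are in
# arbitrary, mutually consistent units and are made up for this example.
import pulp

partition_sizes = [512, 256, 128, 64]            # hypothetical partition sizes
links = ["nvlink", "ib"]                         # hypothetical links
bandwidth = {"nvlink": 300.0, "ib": 100.0}       # hypothetical relative bandwidths

prob = pulp.LpProblem("toy_sdp_planner", pulp.LpMinimize)
assign = pulp.LpVariable.dicts("assign", (range(len(partition_sizes)), links), cat="Binary")
makespan = pulp.LpVariable("makespan", lowBound=0)

prob += makespan  # objective: minimize the finish time of the slowest link

# Each partition is communicated over exactly one link.
for i in range(len(partition_sizes)):
    prob += pulp.lpSum(assign[i][l] for l in links) == 1

# Every link's estimated communication time bounds the makespan.
for l in links:
    prob += pulp.lpSum(
        assign[i][l] * partition_sizes[i] / bandwidth[l] for i in range(len(partition_sizes))
    ) <= makespan

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for i in range(len(partition_sizes)):
    chosen = next(l for l in links if pulp.value(assign[i][l]) > 0.5)
    print(f"partition {i} (size {partition_sizes[i]}) -> {chosen}")
```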