SkipPipe: Partial and Reordered Pipelining Framework for Training LLMs in Heterogeneous Networks

18 Sept 2025 (modified: 14 Jan 2026) · ICLR 2026 Conference Withdrawn Submission · CC BY 4.0
Keywords: distributed, pretraining
Abstract: Data and pipeline parallelism are ubiquitous for training Large Language Models (LLMs) on distributed nodes. The need for cost-effective training has led recent work to explore efficient communication arrangements for end-to-end training. Motivated by LLMs' resistance to layer skipping and layer reordering, in this paper we explore stage skipping (a stage being several consecutive layers) in pipeline training and challenge the conventional practice of sequential pipeline execution. We derive convergence and throughput constraints (guidelines) for pipelining with skipped and swapped pipeline stages. Based on these constraints, we propose SkipPipe, the first partial pipeline framework that reduces end-to-end training time for LLMs with negligible effect on convergence, which we verify analytically and empirically. The core of SkipPipe is a path scheduling algorithm that optimizes the path of each microbatch and reduces its end-to-end execution time while complying with a given stage-skipping ratio. We extensively evaluate SkipPipe on LLaMa models from 500M to 1.5B parameters on up to 20 nodes, through emulation and deployment prototypes. Our results show that SkipPipe reduces training iteration time by up to 50% compared to full pipelining. Additionally, our partial pipeline training also improves resistance to layer omission during inference, incurring a perplexity increase of only 2% when running only 75% of the model.
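To make the per-microbatch path idea concrete, below is a minimal Python sketch of stage skipping under a fixed skip ratio. It is an illustration under my own assumptions (first and last stages always kept, a greedy least-loaded heuristic to spread skipped work across stages, strictly in-order execution of the retained stages), not the SkipPipe scheduling algorithm itself; the function and parameter names are hypothetical.

```python
"""Toy per-microbatch path scheduler with stage skipping (illustrative only)."""
import heapq


def schedule_paths(num_microbatches, num_stages, skip_ratio):
    """Assign each microbatch a subset of pipeline stages to traverse.

    Assumptions (not from the paper):
    - The first and last stages are always kept (embedding / output head analogue).
    - Roughly `skip_ratio` of the middle stages are skipped per microbatch.
    - A greedy heuristic picks the currently least-loaded middle stages so the
      remaining work is balanced across stages.
    - Retained stages are visited in their original order; stage swapping,
      which SkipPipe also considers, is omitted here for brevity.
    """
    middle = list(range(1, num_stages - 1))
    keep_middle = max(0, round(len(middle) * (1.0 - skip_ratio)))

    # Min-heap of (accumulated work units, stage id) for the middle stages.
    load = [(0, s) for s in middle]
    heapq.heapify(load)

    paths = []
    for _ in range(num_microbatches):
        # Take the `keep_middle` least-loaded middle stages for this microbatch.
        chosen = [heapq.heappop(load) for _ in range(keep_middle)]
        # Charge one unit of work to each chosen stage and return it to the heap.
        for work, stage in chosen:
            heapq.heappush(load, (work + 1, stage))
        path = [0] + sorted(stage for _, stage in chosen) + [num_stages - 1]
        paths.append(path)
    return paths


if __name__ == "__main__":
    # Example: 4 microbatches, 8 stages, skipping ~25% of the middle stages.
    for i, path in enumerate(schedule_paths(4, num_stages=8, skip_ratio=0.25)):
        print(f"microbatch {i}: stages {path}")
```

Because each microbatch skips a different subset of stages, no single stage is starved of gradient contributions, which is the intuition behind pairing the skip ratio with convergence and throughput constraints in the paper.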
Primary Area: other topics in machine learning (i.e., none of the above)
Submission Number: 11835