DSP: Dynamic Sequence Parallelism for Multi-Dimensional Transformers

Published: 01 May 2025, Last Modified: 18 Jun 2025. ICML 2025 poster. License: CC BY 4.0
Abstract: Scaling multi-dimensional transformers to long sequences is indispensable across various domains. However, the large memory requirements and slow processing speeds of such sequences necessitate sequence parallelism. All existing approaches fall under the category of embedded sequence parallelism, which is limited to sharding along a single sequence dimension and thereby introduces significant communication overhead. Yet the nature of multi-dimensional transformers involves independent calculations across multiple sequence dimensions. To this end, we propose Dynamic Sequence Parallelism (DSP) as a novel abstraction of sequence parallelism. DSP dynamically switches the parallel dimension among all sequence dimensions according to the computation stage, using an efficient resharding strategy. DSP offers significant reductions in communication cost, adaptability across modules, and ease of implementation with minimal constraints. Experimental evaluations demonstrate DSP's superiority over state-of-the-art embedded sequence parallelism methods, with remarkable throughput improvements ranging from 32.2% to 10x and less than 25% of their communication volume.
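The core operation the abstract describes is switching which sequence dimension is sharded across devices between computation stages. Below is a minimal sketch, not the authors' released implementation, of how such a reshard could be expressed with a single all-to-all in PyTorch; the function name `switch_shard_dim` and the tensor layout are illustrative assumptions.

```python
# Hypothetical sketch of a DSP-style dimension switch: a tensor sharded along
# one sequence dimension is re-sharded along another via one all-to-all.
import torch
import torch.distributed as dist


def switch_shard_dim(x: torch.Tensor, cur_dim: int, new_dim: int, group=None) -> torch.Tensor:
    """Re-shard `x` from `cur_dim` (currently split across ranks) to `new_dim`.

    Locally, `x` holds the full extent of every dimension except `cur_dim`,
    which holds 1/world_size of its global extent. After the call, `new_dim`
    is the sharded dimension instead and `cur_dim` is gathered in full.
    Assumes both dimensions are divisible by the world size.
    """
    world_size = dist.get_world_size(group)
    # Split the locally-full target dimension into one chunk per rank ...
    send_chunks = [c.contiguous() for c in x.chunk(world_size, dim=new_dim)]
    recv_chunks = [torch.empty_like(c) for c in send_chunks]
    # ... exchange the chunks so each rank keeps only its slice of `new_dim` ...
    dist.all_to_all(recv_chunks, send_chunks, group=group)
    # ... and concatenate the received pieces to restore `cur_dim` in full.
    return torch.cat(recv_chunks, dim=cur_dim)


# Example usage (e.g. under `torchrun --nproc_per_node=2`):
# x has shape [batch, T/world, S, hidden], sharded along the temporal dim (1);
# before a spatial-attention stage, switch the shard to the spatial dim (2):
# x = switch_shard_dim(x, cur_dim=1, new_dim=2)  # -> [batch, T, S/world, hidden]
```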
Lay Summary: Making AI Faster with Complex Information. Modern AI often deals with very long sequences of information, like lengthy documents or videos. Processing this complex data can be slow and require a lot of computer memory. Current methods try to speed this up by dividing the work, but they're often rigid, only splitting the data in one way. This can cause slowdowns as different computer parts shuffle information back and forth. Our new approach, Dynamic Sequence Parallelism (DSP), is much more flexible. It intelligently changes how it divides the data based on the specific task the AI is performing at that moment. This smart, adaptive splitting significantly reduces the data shuffling. The result? DSP makes AI systems 32.2% to 10 times faster at handling long, complex information, all while using less than a quarter of the communication of older methods. This allows for more powerful and efficient AI.
Primary Area: Optimization->Large Scale, Parallel and Distributed
Keywords: Sequence Parallelism, Sequence Parallel, Long Sequence, High Performance Computing, Distributed System, Multi-Dimensional Transformer
Submission Number: 3238