A Unifying Framework for Parallelizing Sequential Models with Linear Dynamical Systems

Published: 06 Apr 2026, Last Modified: 06 Apr 2026
Accepted by TMLR
License: CC BY 4.0
Abstract: Harnessing parallelism in seemingly sequential models is a central challenge for modern machine learning. Several approaches have been proposed for evaluating sequential processes in parallel using iterative fixed-point methods, like Newton, Picard, and Jacobi iterations. In this work, we show that these methods can be understood within a common framework based on linear dynamical systems (LDSs), where different iteration schemes arise naturally as approximate linearizations of a nonlinear recursion. Moreover, we theoretically analyze the rates of convergence of these methods, and we verify the predictions of this theory with several case studies. This unifying framework highlights shared principles behind these techniques and clarifies when particular fixed-point methods are most likely to be effective. By bridging diverse algorithms through the language of LDSs, the framework provides a clearer theoretical foundation for parallelizing sequential models and points toward new opportunities for efficient and scalable computation.
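To make the idea concrete, here is a minimal sketch of the Jacobi-style fixed-point scheme the abstract alludes to, applied to a scalar nonlinear recursion. The transition function `f`, the function names, and the scalar setting are illustrative assumptions, not taken from the paper: the point is only that updating every time step simultaneously converges to the same trajectory as a sequential rollout, exactly after at most T sweeps.

```python
import numpy as np

def f(x):
    # Illustrative nonlinear state transition (a contraction); not from the paper.
    return np.tanh(0.9 * x + 0.1)

def sequential_rollout(x0, T):
    # Baseline: evaluate the recursion x_t = f(x_{t-1}) one step at a time.
    xs = [x0]
    for _ in range(T):
        xs.append(f(xs[-1]))
    return np.array(xs[1:])

def jacobi_parallel_rollout(x0, T):
    # Jacobi fixed-point iteration: treat the whole trajectory as the unknown
    # and apply x_t <- f(x_{t-1}) to every t at once. Each sweep propagates
    # correct information one step further, so T sweeps recover the exact
    # sequential trajectory; with a contraction, far fewer often suffice.
    xs = np.zeros(T)  # initial guess for the full trajectory
    for _ in range(T):
        prev = np.concatenate(([x0], xs[:-1]))  # shift: x_{t-1} for each t
        xs = f(prev)  # all time steps updated simultaneously
    return xs
```

Each sweep is embarrassingly parallel across time steps, which is where the speedup over the strictly sequential rollout comes from; the Newton and Picard variants discussed in the paper replace this naive update with linearized (LDS) corrections that converge in fewer sweeps.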
Submission Length: Long submission (more than 12 pages of main content)
Changes Since Last Submission: The camera ready version! We have updated the date to indicate April, and fixed some minor typos.
Code: https://github.com/lindermanlab/parallelizing_with_lds
Supplementary Material: zip
Assigned Action Editor: ~Yaoliang_Yu1
Submission Number: 5999