One Rank at a Time: Cascading Error Dynamics in Sequential Learning

19 Jan 2026 (modified: 30 Apr 2026) · Under review for TMLR · CC BY 4.0
Abstract: Sequential learning, in which complex tasks are broken down into simpler, hierarchical components, has emerged as a paradigm in AI. This paper views sequential learning through the lens of low-rank linear regression, focusing on how errors propagate when rank-1 subspaces are learned sequentially. We present an analysis framework that decomposes the learning process into a series of rank-1 estimation problems, where each estimation depends on the accuracy of the previous steps. Our aim is explanatory rather than comparative: we analyze error propagation and derive compute-allocation guidance without claiming superiority over joint or one-shot training. Our contribution is a characterization of error propagation in this sequential process, establishing bounds on how errors (e.g., due to limited computational budgets and finite precision) affect overall model accuracy. We prove that these errors compound in predictable ways, with implications for both algorithmic design and stability guarantees.
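The setting the abstract describes can be illustrated with a minimal sketch (not the paper's actual algorithm): a rank-r matrix is recovered one rank-1 component at a time by greedy deflation, where each step fits the best rank-1 approximation of the current residual, so any error made at one step is inherited by all later steps. All names and the synthetic setup below are illustrative assumptions.

```python
import numpy as np

# Hypothetical setup: recover a rank-3 target one rank-1 piece at a time.
rng = np.random.default_rng(0)
d, r = 20, 3
W = sum(np.outer(rng.standard_normal(d), rng.standard_normal(d))
        for _ in range(r))

residual = W.copy()
errors = []  # relative Frobenius error after each sequential step
for step in range(r):
    # Best rank-1 approximation of the current residual via truncated SVD;
    # this is the idealized (error-free) version of each sequential step.
    U, s, Vt = np.linalg.svd(residual)
    rank1 = s[0] * np.outer(U[:, 0], Vt[0])
    residual = residual - rank1
    errors.append(np.linalg.norm(residual) / np.linalg.norm(W))

print(errors)
```

With exact rank-1 steps the residual error is strictly decreasing and vanishes (to floating-point precision) after r steps; injecting a perturbation into an early `rank1` estimate and rerunning the loop shows how that error contaminates every subsequent estimation, the compounding effect the paper bounds.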
Submission Type: Long submission (more than 12 pages of main content)
Assigned Action Editor: ~changjian_shui1
Submission Number: 7068