Abstract: Multi-hybrid architectures are poised to take over language modeling due to their better quality and performance. We introduce a hierarchical decomposition framework for linear recurrences that yields Sliding Window Recurrences (SWR), a windowed training mode for recurrence layers in hybrid models. Unlike sliding-window attention, SWR is derived from the transfer structure of recurrences: it truncates the carrier system induced by the decomposition while preserving dense local recurrence dynamics. We focus specifically on hardware-aligned windows, which are naturally jagged and limit costly inter-warp communication. Using SWR, we develop Phalanx layers for hybrid language models. In 1B-parameter multi-hybrid models, Phalanx achieves 10-40% speedups at 4K to 16K context lengths over optimized Transformers while matching perplexity.
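To make the windowing concrete, here is a minimal sketch (our own illustration, not the paper's kernel) of a diagonal linear recurrence evaluated in truncated windows: the state is carried densely within each window and reset at window boundaries, mirroring how SWR drops the cross-window carrier while keeping the local recurrence dynamics dense. The function name, the gates `a`/`b`, and `window_size` are all illustrative assumptions.

```python
# Sketch of a windowed linear recurrence (illustrative, not the Phalanx kernel):
# h_t = a_t * h_{t-1} + b_t * x_t, with the carried state truncated (reset to
# zero) at each window boundary, so no information crosses windows.
import numpy as np

def sliding_window_recurrence(x, a, b, window_size):
    """Run h_t = a_t * h_{t-1} + b_t * x_t, resetting h at each window start.

    x, a, b: arrays of shape (T, D); window_size: window length W.
    Returns h of shape (T, D).
    """
    T, D = x.shape
    h = np.zeros((T, D), dtype=x.dtype)
    state = np.zeros(D, dtype=x.dtype)
    for t in range(T):
        if t % window_size == 0:
            # Truncate the carrier: no state propagates across windows.
            state = np.zeros(D, dtype=x.dtype)
        # Dense local recurrence within the current window.
        state = a[t] * state + b[t] * x[t]
        h[t] = state
    return h
```

Under this truncation, activations and gradients never cross window boundaries, which is what permits window-local execution on hardware-aligned tiles.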
Submission Type: Long submission (more than 12 pages of main content)
Assigned Action Editor: ~Ankit_Singh_Rawat1
Submission Number: 7011