Keywords: State Tracking, State Space, Mamba, Linear RNN, Linear Attention, GLA, DeltaNet, Formal Languages
TL;DR: We show that expanding the eigenvalue range of Linear RNNs from [0, 1] to [-1, 1] enhances their state-tracking capabilities, enabling them to solve complex tasks like parity and modular counting, while preserving their efficiency in language modeling.
Abstract: Linear Recurrent Neural Networks (LRNNs), such as Mamba, RWKV, GLA, mLSTM, and DeltaNet, have emerged as efficient alternatives to transformers in large language modeling, offering linear scaling with sequence length and improved training efficiency. However, LRNNs struggle with state tracking, which is important for, e.g., code comprehension or tracking chess pieces across a board. Even parity, the simplest state-tracking task, which non-linear RNNs like LSTMs handle effectively, cannot be solved by current LRNNs. Recently, Sarrof et al. (2024) demonstrated that the failure of LRNNs like Mamba to solve parity stems from restricting the eigenvalue range of their diagonal state-transition matrices to $[0, 1]$, and that incorporating negative eigenvalues can resolve this issue. We generalize this result to full-matrix LRNNs, which have recently shown promise in models such as DeltaNet. We prove that no finite-precision LRNN with state-transition matrices having only positive eigenvalues can solve parity, while complex eigenvalues are needed to count modulo $3$. Notably, we also prove that LRNNs can learn any regular language when their state-transition matrices are products of identity plus vector outer product matrices with eigenvalues in the range $[-1, 1]$. Our empirical results confirm that extending the eigenvalue range of models like Mamba and DeltaNet to include negative values not only enables them to solve parity but also consistently improves their performance on state-tracking tasks. Furthermore, pre-training LRNNs with an extended eigenvalue range for language modeling achieves comparable performance and stability while showing promise for coding tasks. Our work enhances the expressivity of modern LRNNs, broadening their applicability without changing the cost of training or inference.
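The following is a minimal, self-contained sketch (an illustration of the abstract's claim, not the authors' implementation) of why the eigenvalue range matters: a one-dimensional linear recurrence h_t = a_t · h_{t-1} tracks parity as soon as the input-dependent transition value a_t is allowed to reach -1, whereas a transition confined to [0, 1] can only shrink or preserve the state and can never flip it.

```python
# Minimal sketch (illustration only, not the paper's code):
# a 1-D diagonal linear RNN  h_t = a_t * h_{t-1}  solves parity when the
# input-dependent transition a_t may be negative.

def parity_lrnn(bits):
    """Return the parity of `bits` using a purely linear recurrence."""
    h = 1.0                            # state encodes parity: +1 = even, -1 = odd
    for x in bits:
        a = -1.0 if x == 1 else 1.0    # eigenvalue -1 flips the state, +1 keeps it
        h = a * h                      # linear update, no non-linearity needed
    return 0 if h > 0 else 1

assert parity_lrnn([1, 0, 1, 1]) == 1  # three ones -> odd
assert parity_lrnn([0, 1, 1, 0]) == 0  # two ones  -> even
# If a_t were constrained to [0, 1] (e.g. a sigmoid or exponential-decay gate),
# h_t could only decay toward 0, so no readout could recover the parity.
```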
Primary Area: other topics in machine learning (i.e., none of the above)
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 10504