Implicit Language Models are RNNs: Balancing Parallelization and Expressivity

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 Spotlight Poster · CC BY 4.0
TL;DR: Implicit SSMs bridge RNN expressiveness and transformer parallelization by iterating transformations to approximate fixed points, enabling scalable training and improved performance on state-tracking tasks and large-scale language modeling.
Abstract: State-space models (SSMs) and transformers dominate the language modeling landscape. However, they are constrained to a lower computational complexity than classical recurrent neural networks (RNNs), limiting their expressivity. In contrast, RNNs lack parallelization during training, raising fundamental questions about the trade-off between parallelization and expressivity. We propose implicit SSMs, which iterate a transformation until convergence to a fixed point. Theoretically, we show that implicit SSMs implement the non-linear state transitions of RNNs. Empirically, we find that only approximate fixed-point convergence suffices, enabling the design of a scalable training curriculum that largely retains parallelization, with full convergence required only for a small subset of tokens. Our approach demonstrates superior state-tracking capabilities on regular languages, surpassing transformers and SSMs. We further scale implicit SSMs to natural language reasoning tasks and pretraining of large-scale language models up to 1.3B parameters on 207B tokens, representing, to our knowledge, the largest implicit model trained to date. Notably, our implicit models outperform their explicit counterparts on standard benchmarks. Our code is publicly available at github.com/microsoft/implicit_languagemodels
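
To make the fixed-point idea concrete, the sketch below illustrates the general recipe described in the abstract: a sequence transformation whose per-token gates depend on the current iterate is applied repeatedly until the hidden states stop changing, up to a tolerance. This is a minimal NumPy illustration under stated assumptions, not the authors' implementation: the names (ssm_scan, fixed_point, A_proj, B_proj) are hypothetical, the recurrence is written sequentially for readability where the paper's models would use a parallel scan, and convergence of the iteration depends on the underlying map being approximately contractive.

Example (Python):

    import numpy as np

    def sigmoid(u):
        return 1.0 / (1.0 + np.exp(-u))

    def ssm_scan(z, x, A_proj, B_proj):
        # One pass of a toy SSM-style layer. The per-token decay a and the
        # candidate state b depend on the current iterate z, so the fixed
        # point of repeated passes realizes a nonlinear state transition.
        T, d = x.shape
        a = sigmoid((z + x) @ A_proj)   # decay gates in (0, 1), shape (T, d)
        b = np.tanh((z + x) @ B_proj)   # candidate states, shape (T, d)
        h = np.zeros_like(x)
        state = np.zeros(d)
        for t in range(T):              # sequential loop for clarity only;
            state = a[t] * state + (1.0 - a[t]) * b[t]
            h[t] = state
        return h

    def fixed_point(x, A_proj, B_proj, tol=1e-4, max_iter=50):
        # Iterate the layer until the hidden states stop changing; as in the
        # abstract, approximate convergence is treated as good enough.
        z = np.zeros_like(x)
        for k in range(max_iter):
            z_next = ssm_scan(z, x, A_proj, B_proj)
            if np.max(np.abs(z_next - z)) < tol:
                return z_next, k + 1
            z = z_next
        return z, max_iter

    # Toy usage: small random weights keep the map well-behaved in this demo.
    rng = np.random.default_rng(0)
    T, d = 8, 16
    x = 0.1 * rng.standard_normal((T, d))
    A_proj = 0.1 * rng.standard_normal((d, d))
    B_proj = 0.1 * rng.standard_normal((d, d))
    z_star, n_iters = fixed_point(x, A_proj, B_proj)
    print(f"equilibrium states: {z_star.shape}, reached after {n_iters} iterations")

In a full model, the residual between successive iterates would also drive a training curriculum, so that most tokens need only a few iterations and full convergence is enforced for a small subset.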
Lay Summary: Current state-of-the-art language models are pretrained on enormous numbers of tokens, which requires architectures that can be trained in parallel across tokens. However, it is known in the literature that the architectures enabling token parallelism come with an inherent weakness: they limit which thought patterns or algorithms such models can express internally. Most notably, this limits their ability to keep track of sustained changes to their environment. We alleviate this weakness by giving language models an internal, hidden monologue for every token, which, as we show, qualitatively increases their expressivity. We then validate our results at larger scale and show improvements on various benchmarks, particularly those that require a model to keep track of a collection of entities. Our research demonstrably improves the generalization abilities of language models, enabling them to solve problems they have not seen before.
Link To Code: github.com/microsoft/implicit_languagemodels
Primary Area: Deep Learning->Algorithms
Keywords: State-space models, deep equilibrium models, RNN, transformer, large language models, sequence models, regular languages, Chomsky hierarchy
Submission Number: 6534