State space models can express $n$-gram languages

Published: 07 Mar 2025, Last Modified: 07 Mar 2025 | Accepted by TMLR | CC BY 4.0
Abstract: Recent advances in recurrent neural networks (RNNs) have reinvigorated interest in their application to natural language processing, particularly through more efficient and parallelizable variants known as state space models (SSMs), which achieve performance competitive with transformers while maintaining a lower memory footprint. Although RNNs and SSMs (e.g., Mamba) have been empirically more successful than rule-based systems built on $n$-gram models, a rigorous theoretical explanation for this success is still lacking, as it is unclear how these models encode the combinatorial rules that govern the next-word prediction task. In this paper, we construct state space language models that solve the next-word prediction task for languages generated from $n$-gram rules, thereby showing that SSMs are more expressive than $n$-gram models. Our proof shows how SSMs can encode $n$-gram rules using new theoretical results on their memorization capacity, and demonstrates how their context window can be controlled by restricting the spectrum of the state transition matrix. We conduct experiments on a small dataset generated from $n$-gram rules to show how our framework applies to SSMs and RNNs obtained through gradient-based optimization.
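To make the spectral idea concrete, the following is a minimal sketch (not the paper's construction; `make_shift_ssm` and `run_ssm` are hypothetical helpers) of a linear SSM whose state transition matrix is a nilpotent block shift, so its spectrum is {0} and the state retains exactly the last $n-1$ tokens, mirroring the finite context of an $n$-gram model:

```python
import numpy as np

# Hypothetical illustration: a linear SSM whose state transition matrix A is a
# block shift matrix. Its spectrum is {0} (nilpotent), so the state forgets
# everything older than n-1 steps -- a hard context window like an n-gram model.

def make_shift_ssm(vocab_size: int, n: int):
    """Build (A, B): the state stores the last n-1 one-hot token embeddings."""
    d = vocab_size * (n - 1)                # state dimension
    A = np.zeros((d, d))
    # Shift each stored token one block "older"; the oldest block falls off.
    for k in range(n - 2):
        A[(k + 1) * vocab_size:(k + 2) * vocab_size,
          k * vocab_size:(k + 1) * vocab_size] = np.eye(vocab_size)
    B = np.zeros((d, vocab_size))
    B[:vocab_size, :] = np.eye(vocab_size)  # newest token enters the first block
    return A, B

def run_ssm(A, B, tokens, vocab_size):
    """Recurrence x_{t+1} = A x_t + B u_t with one-hot inputs u_t."""
    x = np.zeros(A.shape[0])
    for tok in tokens:
        u = np.eye(vocab_size)[tok]
        x = A @ x + B @ u
    return x  # encodes exactly the last n-1 tokens

# Example: vocabulary of 3 symbols, trigram-style context (n=3, last 2 tokens).
A, B = make_shift_ssm(vocab_size=3, n=3)
state = run_ssm(A, B, tokens=[0, 2, 1, 2], vocab_size=3)
print(state.reshape(2, 3))  # row 0: most recent token (2); row 1: previous token (1)
```

A readout layer on top of such a state can then implement the $n$-gram lookup table for next-word prediction; the paper's actual construction and its memorization-capacity results are more general than this toy example.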
Submission Length: Long submission (more than 12 pages of main content)
Changes Since Last Submission: We have made several revisions to address reviewer feedback. We have also added a few citations and a link to our code repository.
Code: https://github.com/tmllab/2025_TMLR_SSM-ngrams
Supplementary Material: zip
Assigned Action Editor: ~Razvan_Pascanu1
Submission Number: 3741