Randomly Sampled Language Reasoning Problems Reveal Limits of LLMs

Published: 06 Mar 2025, Last Modified: 20 Apr 2025 · ICLR 2025 Workshop VerifAI Poster · CC BY 4.0
Keywords: large language models, evaluation, regular languages, benchmarks
TL;DR: We develop a benchmark for LLMs using tasks related to languages recognized by DFAs, and find that LLM in-context learning underperforms simple n-gram heuristics on these tasks.
Abstract: Can LLMs pick up language structure from examples? Evidence in prior work seems to indicate yes, as pretrained models repeatedly demonstrate the ability to adapt to new language structures. However, this line of research typically considers languages that are present within common pretraining datasets, or that otherwise share notable similarities with seen languages. In contrast, in this work we attempt to measure models' language understanding capacity while circumventing the risk of dataset recall. We parameterize large families of language tasks built on languages recognized by deterministic finite automata (DFAs), and can thus sample novel language reasoning problems to fairly evaluate LLMs regardless of training data. We find that, even in the strikingly simple setting of 3-state DFAs, LLMs underperform unparameterized n-gram models on both language recognition and synthesis tasks. These results suggest that LLMs struggle to match the ability of basic language models in recognizing and reasoning over languages that are sufficiently distinct from the ones seen at training time, underscoring the distinction between learning individual languages and possessing a general theory of language.
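To make the setup concrete, below is a minimal sketch of the kind of task construction the abstract describes: sampling a random 3-state DFA over a small alphabet and building an in-context language-recognition prompt from labeled strings. This is an illustrative assumption of the pipeline, not the authors' code; the alphabet, prompt format, and function names (`sample_dfa`, `make_recognition_prompt`) are hypothetical.

```python
import random

def sample_dfa(num_states=3, alphabet=("a", "b"), seed=None):
    """Sample a random DFA: transition table, accepting-state set, start state 0.

    Hypothetical sketch of the sampling step described in the abstract.
    """
    rng = random.Random(seed)
    transitions = {
        (state, symbol): rng.randrange(num_states)
        for state in range(num_states)
        for symbol in alphabet
    }
    # Keep at least one accepting and one rejecting state so both labels occur.
    accepting = set(rng.sample(range(num_states), rng.randint(1, num_states - 1)))
    return transitions, accepting

def accepts(transitions, accepting, string):
    """Run the DFA from state 0 and report whether it ends in an accepting state."""
    state = 0
    for symbol in string:
        state = transitions[(state, symbol)]
    return state in accepting

def make_recognition_prompt(transitions, accepting, alphabet=("a", "b"),
                            num_examples=20, max_len=10, seed=None):
    """Build an in-context recognition prompt plus one held-out query and its label."""
    rng = random.Random(seed)
    labeled = []
    for _ in range(num_examples + 1):
        s = "".join(rng.choice(alphabet) for _ in range(rng.randint(1, max_len)))
        label = "yes" if accepts(transitions, accepting, s) else "no"
        labeled.append((s, label))
    demos, (query, gold) = labeled[:-1], labeled[-1]
    prompt = "\n".join(f"{s} -> {label}" for s, label in demos)
    prompt += f"\n{query} -> "
    return prompt, gold

if __name__ == "__main__":
    dfa, acc = sample_dfa(seed=0)
    prompt, gold = make_recognition_prompt(dfa, acc, seed=1)
    print(prompt)
    print("gold:", gold)
```

Because the DFA itself is freshly sampled, prompts built this way cannot be answered by recalling a language seen during pretraining, which is the point of the evaluation; the same sampled DFA can also serve as the reference for an n-gram baseline fit only to the in-context examples.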
Submission Number: 4