Keywords: looped transformers, padded transformers, context-free grammars, parsing, formal languages
TL;DR: Looped transformers can recognize all context-free languages, but require n^6 padding tokens. With n^3 padding, they can recognize unambiguous context-free languages.
Abstract: Transformers excel on tasks that process well-formed inputs according to some grammar, such as natural language and code. However, it remains unclear how they can process grammatical syntax internally. In fact, under standard complexity conjectures, standard transformers cannot recognize context-free languages (CFLs), a canonical formalism for describing syntax, or even regular languages, a subclass of CFLs. Merrill & Sabharwal (2025) show that $\mathcal{O}(\log n)$ looping layers (with respect to input length $n$) allow transformers to recognize regular languages, but the question of context-free recognition remained open. In this work, we show that looped transformers with $\mathcal{O}(\log n)$ looping layers and $\mathcal{O}(n^6)$ padding tokens can recognize all CFLs. However, training and inference with $\mathcal{O}(n^6)$ padding tokens are potentially impractical. Fortunately, we show that, for natural subclasses such as unambiguous CFLs, the recognition problem on transformers becomes more tractable, requiring only $\mathcal{O}(n^3)$ padding.
We empirically validate our results and show that looping helps on languages that require logarithmic depth.
Overall, our results shed light on the intricacy of CFL recognition by transformers: while general recognition may require an intractable amount of padding, natural constraints such as unambiguity yield efficient recognition algorithms.
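For intuition on why cubic quantities arise in CFL recognition, the classical sequential baseline is the CYK algorithm, which fills an $n \times n$ table of nonterminal sets in $\mathcal{O}(n^3)$ time for a grammar in Chomsky normal form. The following is a minimal Python sketch of that baseline, not the paper's transformer construction; the function name, grammar encoding, and example grammar are illustrative choices.

```python
from collections import defaultdict

def cyk_recognize(tokens, unary_rules, binary_rules, start):
    """CYK recognition for a grammar in Chomsky normal form.

    unary_rules:  dict terminal -> set of nonterminals A with A -> terminal
    binary_rules: dict (B, C)   -> set of nonterminals A with A -> B C
    Returns True iff `start` derives the token sequence.
    """
    n = len(tokens)
    if n == 0:
        return False
    # table[i][j] = set of nonterminals deriving tokens[i..j] inclusive
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, tok in enumerate(tokens):
        table[i][i] = set(unary_rules.get(tok, set()))
    # Fill spans of increasing length: O(n^2) cells, O(n) split points each.
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            j = i + length - 1
            for k in range(i, j):  # split point between left and right child
                for B in table[i][k]:
                    for C in table[k + 1][j]:
                        table[i][j] |= binary_rules.get((B, C), set())
    return start in table[0][n - 1]

# Example: balanced parentheses in CNF
# S -> S S | L R | L P,  P -> S R,  L -> '(',  R -> ')'
unary = {'(': {'L'}, ')': {'R'}}
binary = defaultdict(set)
binary[('S', 'S')].add('S')
binary[('L', 'R')].add('S')
binary[('L', 'P')].add('S')
binary[('S', 'R')].add('P')

print(cyk_recognize(list('(()())'), unary, binary, 'S'))  # True
print(cyk_recognize(list('(()'), unary, binary, 'S'))     # False
```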
Primary Area: interpretability and explainable AI
Submission Number: 16436