Causal Evidence of Stack Representations in Modeling Counter Languages Using Transformers

Published: 01 Apr 2026 · Last Modified: 25 Apr 2026 · ICLR 2026 Workshop LLM Reasoning · CC BY 4.0
Track: tiny / short paper (up to 4 pages)
Keywords: Mechanistic Interpretability, Causality, Formal Languages, Transformers
TL;DR: Transformers trained on next-token prediction for Shuffle-K learn stack representations which are causally relevant.
Abstract: Formal languages have proven to be effective conduits for understanding the inner mechanisms of transformers. Past work has shown that transformers trained on next-token prediction over counter languages learn representations consistent with an underlying stack structure. Beyond representational analysis, this paper investigates the causal role of these representations. Linear probes are trained to predict the stack depth at each token from the model’s hidden states, and a principal representation direction is extracted from each probe. Ablating this direction from the model’s hidden states causes sequential accuracy to collapse to near 0%, providing strong empirical evidence that the stack representation is not merely learned but causally necessary for model performance.
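
Below is a minimal sketch of the probe-and-ablate procedure the abstract describes, assuming per-token hidden states and ground-truth stack depths have already been collected from the trained model. The function names, the least-squares probe, and the choice of the probe's weight vector as the "principal representation direction" are illustrative assumptions, not the paper's released implementation.

```python
import numpy as np

def fit_depth_probe(hidden, depths):
    """Fit a linear probe h -> stack depth by least squares.

    hidden: (N, d) array of hidden states, one row per token.
    depths: (N,) array of ground-truth stack depths.
    Returns (w, b): probe weight vector and bias.
    """
    X = np.hstack([hidden, np.ones((hidden.shape[0], 1))])  # append bias column
    coef, *_ = np.linalg.lstsq(X, depths, rcond=None)
    return coef[:-1], coef[-1]

def ablate_direction(hidden, w):
    """Remove the component of each hidden state along the probe direction."""
    u = w / np.linalg.norm(w)                 # unit vector along the probe direction
    return hidden - np.outer(hidden @ u, u)   # project out that component

# Hypothetical usage on synthetic data where depth is encoded along one direction.
rng = np.random.default_rng(0)
d, N = 64, 1000
true_dir = rng.normal(size=d)
depths = rng.integers(0, 5, size=N).astype(float)
hidden = depths[:, None] * true_dir + rng.normal(size=(N, d))

w, b = fit_depth_probe(hidden, depths)
ablated = ablate_direction(hidden, w)
print(np.corrcoef(ablated @ w + b, depths)[0, 1])  # near 0: depth no longer readable
```

In the experiment itself, this projection would presumably be applied to the hidden states at the probed layer during inference (e.g., via a forward hook) before measuring sequential accuracy on held-out Shuffle-K strings.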
Anonymization: This submission has been anonymized for double-blind review via the removal of identifying information such as names, affiliations, and identifying URLs.
Submission Number: 185