Transformers Pretrained on Procedural Data Contain Modular Structures for Algorithmic Reasoning

Published: 10 Jun 2025, Last Modified: 15 Jul 2025, MOSS@ICML2025, CC BY 4.0
Keywords: Inductive Biases, Procedural Data, Algorithmic Reasoning, Pre-training, Transformers
TL;DR: Different types of procedural data teach small transformers distinct but complementary reasoning skills by instilling inductive structures.
Abstract: $\textbf{Context.}$ Pretraining on large, semantically rich datasets is key for developing language models. Surprisingly, recent studies have shown that even synthetic data, generated procedurally through simple semantic-free algorithms, can yield some of the same benefits as natural language pretraining. It is unclear $\textit{what}$ specific capabilities such simple synthetic data instils in a model, $\textit{where}$ these capabilities reside in the architecture, and $\textit{how}$ they manifest within its weights. $\textbf{Findings.}$ In this short paper, we identify several beneficial forms of procedural data, together with specific algorithmic reasoning skills that improve in small transformers. Our core finding is that different procedural rules instil $\textit{distinct but complementary inductive structures}$ in the model. With extensive ablations and partial-transfer experiments, we discover that these structures reside in different parts of the model. Attention layers often carry the most transferable information, but some pretraining rules impart useful structure to MLP blocks instead. Most interestingly, the structures induced by multiple rules can be composed to jointly reinforce multiple capabilities. $\textbf{Implications.}$ These results suggest an exciting possibility of disentangling the acquisition of knowledge from reasoning in language models, with the goal of improving their robustness and data efficiency.
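The abstract's partial-transfer experiments can be pictured with a small sketch: pretrain a tiny transformer on semantic-free procedural sequences, then copy only its attention parameters into a freshly initialised model before fine-tuning on a reasoning task. The code below is a minimal illustration of that idea, not the authors' implementation; the model sizes, the `TinyTransformer` class, and the `transfer_attention_only` helper are all illustrative assumptions.

```python
# A minimal sketch (assumed, not the paper's code) of a partial-transfer
# experiment: copy only the attention-block weights of a procedurally
# pretrained transformer into a fresh model, leaving embeddings, MLP
# blocks, and the output head at their random initialisation.

import torch
import torch.nn as nn


class TinyTransformer(nn.Module):
    """A small transformer over a toy token vocabulary (sizes are illustrative)."""

    def __init__(self, vocab_size=32, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads,
            dim_feedforward=4 * d_model, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, tokens):
        return self.head(self.blocks(self.embed(tokens)))


def transfer_attention_only(src: nn.Module, dst: nn.Module) -> None:
    """Copy attention parameters from src into dst; all other weights stay fresh."""
    src_state = src.state_dict()
    dst_state = dst.state_dict()
    for name, param in src_state.items():
        if "self_attn" in name:  # attention projections only
            dst_state[name] = param.clone()
    dst.load_state_dict(dst_state)


if __name__ == "__main__":
    torch.manual_seed(0)
    pretrained = TinyTransformer()
    # ... pretrain `pretrained` on procedurally generated sequences here ...
    fresh = TinyTransformer()
    transfer_attention_only(pretrained, fresh)
    # `fresh` now carries only the attention structure induced by the
    # procedural rule; fine-tuning it on an algorithmic reasoning task
    # measures how much of the pretraining benefit transfers.
```

The same pattern can be inverted (copying only MLP-block parameters) to probe which component carries the transferable structure for a given procedural rule.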
Code: zip
Submission Number: 6