Structure Development in List-Sorting Transformers

TMLR Paper 3613 Authors

01 Nov 2024 (modified: 27 Jan 2025) · Rejected by TMLR · CC BY 4.0
Abstract: We study how a one-layer attention-only transformer develops relevant structures while learning to sort lists of numbers. At the end of training, the model organizes its attention heads in two main modes that we refer to as vocabulary-splitting and copy-suppression. Both are simpler modes than having multiple heads handle overlapping ranges of numbers. Interestingly, vocabulary-splitting is present regardless of whether we use weight decay, a common regularization technique thought to drive simplification, supporting the thesis that neural networks naturally prefer simpler solutions. We relate copy-suppression to a mechanism in \texttt{GPT-2} and investigate its functional role in our model. Guided by insights from a developmental analysis of the model, we identify features in the training data that drive the final solution the model acquires. This provides a concrete example of how the training data shape the internal organization of transformers, paving the way for future studies that could help us better understand how LLMs develop their internal structures.
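The sketch below illustrates the kind of setup the abstract describes: a one-layer, attention-only transformer trained to emit a sorted copy of an input list. It is not the authors' implementation; the PyTorch framework, the vocabulary size, list length, separator token, and all hyperparameters (including the optional weight decay) are illustrative assumptions.

```python
# Minimal sketch (assumed setup, not the paper's code): a one-layer
# attention-only transformer trained to sort short lists of integers.
import torch
import torch.nn as nn

VOCAB, LIST_LEN, D_MODEL, N_HEADS = 64, 10, 128, 2  # illustrative values
SEP = VOCAB  # hypothetical separator token between the list and its sorted copy


class AttnOnlyTransformer(nn.Module):
    """One attention block with a residual connection; no MLP, no LayerNorm."""

    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB + 1, D_MODEL)          # +1 for SEP
        self.pos = nn.Embedding(2 * LIST_LEN + 1, D_MODEL)
        self.attn = nn.MultiheadAttention(D_MODEL, N_HEADS, batch_first=True)
        self.unembed = nn.Linear(D_MODEL, VOCAB)

    def forward(self, tokens):
        T = tokens.shape[1]
        x = self.embed(tokens) + self.pos(torch.arange(T, device=tokens.device))
        # Causal mask: True entries are blocked from attending.
        mask = torch.triu(torch.ones(T, T, dtype=torch.bool, device=tokens.device), 1)
        attn_out, _ = self.attn(x, x, x, attn_mask=mask)
        return self.unembed(x + attn_out)


def batch(n=256):
    """Sequences of the form [x_1 .. x_k, SEP, sorted(x)_1 .. sorted(x)_k]."""
    unsorted = torch.randint(0, VOCAB, (n, LIST_LEN))
    sorted_, _ = unsorted.sort(dim=1)
    sep = torch.full((n, 1), SEP)
    return torch.cat([unsorted, sep, sorted_], dim=1)


model = AttnOnlyTransformer()
# weight_decay can be set to 0 or a positive value to compare the two regimes.
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=0.0)

for step in range(1000):
    seq = batch()
    logits = model(seq[:, :-1])
    # Loss only on positions that predict the sorted half of the sequence.
    loss = nn.functional.cross_entropy(
        logits[:, LIST_LEN:].reshape(-1, VOCAB),
        seq[:, LIST_LEN + 1:].reshape(-1),
    )
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Under this kind of setup, the attention heads of the trained model are the objects the paper inspects, e.g. which ranges of the number vocabulary each head attends to and whether a head suppresses copying of already-emitted tokens.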
Submission Length: Long submission (more than 12 pages of main content)
Changes Since Last Submission: We clarified the definition of $\delta$ and $\overline{\delta}$ on page 4.
Assigned Action Editor: ~Laurent_Charlin1
Submission Number: 3613