The underlying structures of self-attention: symmetry, directionality, and emergent dynamics in Transformer training

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
TL;DR: We introduce a new theoretical perspective on self-attention matrices, showing that bidirectional and autoregressive training induce symmetric and directional matrices, respectively. We show how to leverage these structures to boost performance.
Abstract: Self-attention is essential to Transformer architectures, yet how information is embedded in the self-attention matrices and how different objective functions impact this process remain unclear. We present a mathematical framework for analyzing self-attention matrices by deriving the structures governing their weight updates. Using this framework, we demonstrate that bidirectional training induces symmetry in the weight matrices, while autoregressive training results in directionality and column dominance. Our theoretical findings are validated across multiple Transformer models (including ModernBERT, GPT, LLaMA3, and Mistral) and input modalities such as text, vision, and audio. Finally, we apply these insights by showing that symmetric initialization improves the performance of encoder-only models on language tasks. This mathematical analysis offers a novel theoretical perspective on how information is embedded through self-attention, thereby improving the interpretability of Transformer models.
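To make the symmetry claim concrete, the sketch below estimates how symmetric the combined query-key matrix M = W_Q^T W_K is in a pretrained bidirectional model. This is not the authors' exact procedure (see the linked repository for that); it assumes the Hugging Face `BertModel` module layout, and the symmetry score (Frobenius norm of the symmetric part relative to the full matrix) is one plausible metric that may differ from the one used in the paper.

```python
# Hedged sketch: per-layer symmetry of M = W_Q^T W_K in a pretrained encoder.
# Assumes the standard Hugging Face BertModel layout; the paper's metric may differ.
import torch
from transformers import BertModel

def symmetry_score(m: torch.Tensor) -> float:
    """||sym(M)||_F / ||M||_F: equals 1.0 for a perfectly symmetric M and is
    roughly 1/sqrt(2) (~0.71) for an unstructured random matrix."""
    sym = 0.5 * (m + m.T)
    return (sym.norm() / m.norm()).item()

model = BertModel.from_pretrained("bert-base-uncased")
with torch.no_grad():
    for i, layer in enumerate(model.encoder.layer):
        wq = layer.attention.self.query.weight  # nn.Linear stores (out, in); q = x @ wq.T
        wk = layer.attention.self.key.weight
        m = wq.T @ wk  # bilinear form behind the attention logits, q k^T = x (W_Q^T W_K) x'^T
        print(f"layer {i:2d}: symmetry score = {symmetry_score(m):.3f}")
```

Scores well above the random baseline across layers would be consistent with the abstract's claim that bidirectional (masked-language) training induces symmetric attention matrices.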
Lay Summary: Transformer models, which power popular AI tools like ChatGPT and BERT, rely on a mechanism called self-attention to process information. However, researchers still do not fully understand how this mechanism works internally, especially how different training strategies influence it. In this study, we introduce a mathematical approach to examine the structure of self-attention in Transformers. We find that the way the model is trained significantly affects how attention weights are organized. When trained to consider the full context of a sentence, the model develops symmetric attention patterns. In contrast, when trained to predict one word at a time, it forms directional patterns. We confirm these findings across many well-known models used for language, vision, and audio tasks. We also show that starting with symmetric patterns can help language models learn faster and perform better. This research provides new insights into how attention mechanisms shape learning and opens up ways to make AI models more efficient and interpretable.
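The abstract and lay summary both mention that symmetric initialization helps encoder-only models. A minimal sketch of one way to impose this is shown below; it assumes a freshly initialized Hugging Face `BertModel` and simply copies the query projection into the key projection, which makes W_Q^T W_K symmetric (in fact positive semi-definite) at step 0. The authors' actual initialization scheme may differ.

```python
# Hedged sketch: one possible symmetric initialization for an encoder-only model.
# Assumption: copying W_Q into W_K is a stand-in for the paper's scheme, not a
# reproduction of it.
import torch
from transformers import BertConfig, BertModel

config = BertConfig()        # default bert-base sizes
model = BertModel(config)    # randomly initialized, no pretrained weights

with torch.no_grad():
    for layer in model.encoder.layer:
        attn = layer.attention.self
        # With W_K = W_Q, the score matrix M = W_Q^T W_K = W_Q^T W_Q is symmetric
        # from the first training step.
        attn.key.weight.copy_(attn.query.weight)
        attn.key.bias.copy_(attn.query.bias)
```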
Link To Code: https://github.com/matteosaponati/attention-geometry
Primary Area: Deep Learning->Attention Mechanisms
Keywords: Transformer models, Self-attention, Deep Learning Theory, Training Dynamics, Self-supervised training, Initialization techniques, Mechanistic Interpretability
Submission Number: 6834