Bayes optimal learning of attention-indexed models

Published: 09 Jun 2025, Last Modified: 09 Jun 2025 · HiLD at ICML 2025 Oral · CC BY 4.0
Keywords: Bayes optimal analysis, Transformers, Self-attention mechanism, Random matrix theory, Matrix models, Extensive-rank analysis
TL;DR: We introduce and analyze the Attention-Indexed Model (AIM), a theoretical framework for analyzing learning in deep attention layers.
Abstract: We introduce the Attention-Indexed Model (AIM), a theoretical framework for analyzing learning in deep attention layers. Inspired by multi-index models, AIM captures how token-level outputs emerge from layered bilinear interactions over high-dimensional embeddings. Unlike prior tractable attention models, AIM allows full-rank key and query matrices, aligning more closely with practical transformers. Using tools from statistical mechanics and random matrix theory, we derive closed-form predictions for Bayes-optimal generalization error and identify sharp phase transitions as a function of sample complexity, model width, and sequence length. We propose a matching Approximate Message Passing algorithm and show that gradient descent can reach optimal performance. AIM offers a solvable playground for understanding learning in modern attention architectures.
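Illustrative sketch (not from the paper): based only on the abstract's description, an attention-indexed-style output can be read as a bilinear interaction x_i^T (Q K^T) x_j between high-dimensional token embeddings with full-rank query and key matrices. The dimensions, scaling, and nonlinearity below are assumptions for illustration; the paper's exact teacher model may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

d, L = 128, 16                                  # embedding dimension, sequence length (assumed)
X = rng.standard_normal((L, d)) / np.sqrt(d)    # token embeddings
Q = rng.standard_normal((d, d))                 # full-rank query matrix
K = rng.standard_normal((d, d))                 # full-rank key matrix

# Bilinear interaction scores between all token pairs.
S = X @ Q @ K.T @ X.T / np.sqrt(d)              # shape (L, L)

# Token-level outputs via a row-wise softmax (one common choice of readout).
A = np.exp(S) / np.exp(S).sum(axis=1, keepdims=True)
print(A.shape)                                  # (L, L) matrix of attention-indexed outputs
```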
Student Paper: Yes
Submission Number: 97