Gaussian Equivalence for Self-Attention: Asymptotic Spectral Analysis of the Attention Matrix
TL;DR: We rigorously analyze the singular value spectrum of attention matrices, proving a Gaussian equivalence result, showing that the squared singular values deviate from the Marchenko–Pastur law, and identifying a sharp threshold for the equivalence.
Abstract: Self-attention layers have become fundamental building blocks of modern deep neural networks, yet their theoretical understanding remains limited, particularly from the perspective of random matrix theory.
In this work, we provide a rigorous analysis of the singular value spectrum of the attention matrix and establish the first Gaussian equivalence result for attention. In a natural regime where the inverse temperature remains of constant order, we show that the singular value distribution of the attention matrix is asymptotically characterized by a tractable linear model.
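As a concrete illustration only (the dimensions, weight model, and scalings below are our assumptions, not the paper's construction), the following sketch builds a softmax attention matrix from Gaussian tokens and Gaussian query/key weights and computes its empirical singular values:

```python
import numpy as np

# Illustrative simulation of the object under study (assumed setup):
# Gaussian tokens X, Gaussian query/key weights, row-wise softmax.
rng = np.random.default_rng(0)
n, d = 500, 500        # sequence length and embedding dimension (arbitrary)
beta = 1.0             # inverse temperature, kept of constant order

X = rng.standard_normal((n, d))
W_Q = rng.standard_normal((d, d)) / np.sqrt(d)
W_K = rng.standard_normal((d, d)) / np.sqrt(d)

# Pre-softmax score matrix with the usual 1/sqrt(d) scaling.
S = beta * (X @ W_Q @ W_K.T @ X.T) / np.sqrt(d)

# Attention matrix: row-wise softmax (max subtraction for stability).
A = np.exp(S - S.max(axis=1, keepdims=True))
A /= A.sum(axis=1, keepdims=True)

# Singular values of A; their squares are the eigenvalues of A A^T,
# whose limiting distribution the paper characterizes.
sv = np.linalg.svd(A, compute_uv=False)
print("largest singular value:", sv[0])
print("mean squared singular value:", (sv**2).mean())
```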
We further demonstrate that the distribution of the squared singular values deviates from the Marchenko–Pastur law, contrary to what previous work had assumed.
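For reference, the Marchenko–Pastur density against which the squared singular values are compared is the standard one below; the aspect ratio γ and scale σ² are our notation for this reminder, not necessarily the paper's:

```latex
% Marchenko–Pastur density, aspect ratio \gamma \in (0,1], scale \sigma^2
\rho_{\mathrm{MP}}(\lambda)
  = \frac{\sqrt{(\lambda_{+}-\lambda)(\lambda-\lambda_{-})}}{2\pi \sigma^{2} \gamma \lambda},
  \qquad
  \lambda_{\pm} = \sigma^{2}\bigl(1 \pm \sqrt{\gamma}\bigr)^{2},
  \qquad
  \lambda \in [\lambda_{-}, \lambda_{+}].
```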
Our proof relies on two key ingredients: precise control of fluctuations in the normalization term and a refined linearization that leverages favorable Taylor expansions of the exponential.
This analysis also identifies a threshold for linearization and elucidates why attention, despite not being an entrywise operation, admits a rigorous Gaussian equivalence in this regime.
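Heuristically (a schematic expansion under assumed scalings, not the paper's precise statement), the linearization behind such a Gaussian equivalence can be sketched as follows: when the scores s_{ij} are small and β is of constant order, the row-wise softmax expands around the uniform attention matrix.

```latex
% Schematic first-order expansion, assuming s_{ij} = o(1) and \beta = O(1):
A_{ij} \;=\; \frac{e^{\beta s_{ij}}}{\sum_{k=1}^{n} e^{\beta s_{ik}}}
       \;=\; \frac{1 + \beta s_{ij} + O\bigl(\beta^{2} s_{ij}^{2}\bigr)}
                  {n\Bigl(1 + \frac{\beta}{n}\sum_{k} s_{ik} + O(\beta^{2})\Bigr)}
       \;\approx\; \frac{1}{n}\Bigl(1 + \beta s_{ij} - \frac{\beta}{n}\sum_{k} s_{ik}\Bigr).
```

In matrix form this reads A ≈ n⁻¹11ᵀ plus a centered linear term in the score matrix, which is the kind of tractable linear model the abstract refers to; controlling the fluctuations of the softmax denominator is exactly the normalization issue mentioned above.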
Submission Number: 1114