Keywords: Random Matrix Theory, Multi-Head Latent Attention, Spectral Spikes, Spectral Analysis
TL;DR: We perform a random-matrix-theory (RMT) analysis of multi-head latent attention and find that head-shared rotary embeddings eliminate spectral spikes, while classical and width-compressed designs still suffer rank collapse.
Abstract: In this work, we study how multi-head latent attention (MLA), a popular strategy for compressing key/value memory, affects a transformer's internal capacity during pretraining. Using a lightweight suite of Marchenko–Pastur (MP) diagnostics, we analyze the spectrum of the $QK^\top$ Gram matrix throughout training, comparing three variants: the standard multi-head attention (MHA) baseline, MLA-PreRoPE with rotary applied before compression, and MLA-Decoupled, which shares a single rotary sub-vector across all heads. Our random matrix analysis reveals {\bf three key findings}. First, capacity bottlenecks emerge locally: both MHA and MLA-PreRoPE exhibit sharp, early spikes in specific layers that persist and propagate, disrupting the balance between bulk and outlier directions. Second, these spikes coincide with rank collapse, concentrating the model's expressivity into narrow subspaces. Third, only the decoupled variant prevents this cascade, maintaining broad spectral support and suppressing outlier formation across layers. These results underscore that \emph{how} rotary embeddings are applied is just as critical as \emph{where} compression occurs. Sharing rotary components across heads mitigates spectral fragmentation and preserves representational capacity.
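To make the Marchenko–Pastur diagnostic concrete, below is a minimal Python sketch of an MP bulk-edge test: it counts eigenvalues of an empirical Gram matrix that exceed the MP upper edge $\lambda_+ = \sigma^2(1+\sqrt{p/n})^2$, which is the generic form of "spectral spike" detection described in the abstract. The function name `mp_spike_count`, the normalization, and the variance estimator are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np

def mp_spike_count(M, sigma2=None):
    """Count eigenvalues of the Gram matrix (1/n) M M^T above the
    Marchenko-Pastur bulk edge lambda_+ = sigma^2 (1 + sqrt(p/n))^2.

    M is a (p x n) matrix, e.g. a per-layer attention-score matrix
    captured at a training checkpoint. If sigma2 is not supplied, it
    is estimated from the mean eigenvalue, which equals sigma^2 under
    the pure MP law (spikes bias this estimate slightly upward).
    """
    p, n = M.shape
    gamma = p / n                       # aspect ratio of M
    gram = (M @ M.T) / n                # empirical Gram matrix
    eigvals = np.linalg.eigvalsh(gram)  # eigenvalues in ascending order
    if sigma2 is None:
        sigma2 = eigvals.mean()         # crude bulk-variance estimate
    lambda_plus = sigma2 * (1.0 + np.sqrt(gamma)) ** 2  # MP upper edge
    spikes = eigvals[eigvals > lambda_plus]
    return len(spikes), lambda_plus, eigvals

# Sanity check on pure i.i.d. noise: the spike count should typically be zero,
# since all eigenvalues fall inside the MP bulk up to finite-size fluctuations.
rng = np.random.default_rng(0)
noise = rng.standard_normal((256, 1024))
n_spikes, edge, _ = mp_spike_count(noise)
print(f"{n_spikes} eigenvalues above the MP edge {edge:.3f}")
```

Applied per layer and per checkpoint to the attention score matrices, the count of eigenvalues above the edge gives a simple scalar track of outlier formation over training; how the paper constructs its exact $QK^\top$ Gram statistic may differ from this sketch.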
Student Paper: Yes
Submission Number: 103