A Random Matrix Analysis of In-context Memorization for Nonlinear Attention

ICLR 2026 Conference Submission 14573 Authors

18 Sept 2025 (modified: 08 Oct 2025), ICLR 2026 Conference Submission, CC BY 4.0
Keywords: Attention, Transformer, Random Kernel Matrix, Random Matrix Theory, High-dimensional Statistics
TL;DR: We derive a precise expression for the in-context memorization error of nonlinear Attention, for input tokens drawn from a signal-plus-noise model.
Abstract: Attention mechanisms have revolutionized machine learning (ML) by enabling efficient modeling of global dependencies across inputs. Their inherently parallelizable structure allows for efficient scaling with the rapidly increasing size of both pretraining data and model parameters. Yet, despite their central role as the computational backbone of modern large language models (LLMs), the theoretical understanding of Attention, especially in the nonlinear setting, remains limited. In this paper, we provide a precise characterization of the *in-context memorization error* for *nonlinear Attention*, in the high-dimensional regime where the number of input tokens $n$ and their embedding dimension $p$ are both large and comparable. Leveraging recent advances in the theory of large kernel random matrices, we show that nonlinear Attention typically incurs higher memorization error than linear regression on random inputs. However, this gap vanishes, and can even be reversed, when the input exhibits statistical structure, particularly when the Attention weights align with the input signal direction. Our theoretical insights are supported by numerical experiments.
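The following is a minimal numerical sketch, not the paper's exact protocol: it compares the least-squares "memorization" (training) error when a $p$-dimensional readout is fitted on (i) the raw tokens (linear regression) and (ii) the output of a nonlinear attention map with random query-key weights, for tokens drawn from a rank-one signal-plus-noise model. The specific token model, the `tanh` nonlinearity, the ridge level, and the definition of memorization error as training MSE are all illustrative assumptions, not the paper's stated setup.

```python
# Hedged sketch: memorization (training) error of nonlinear Attention vs. linear
# regression, with n tokens and embedding dimension p of comparable size.
# All modeling choices below are assumptions made for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n, p = 400, 300            # comparable token count n and embedding dimension p
signal_strength = 2.0      # strength of the rank-one signal (assumed model)

# Signal-plus-noise tokens: x_i = s_i * mu + noise, with targets tied to s_i.
mu = rng.standard_normal(p) / np.sqrt(p)
s = rng.choice([-1.0, 1.0], size=n)
X = signal_strength * np.outer(s, mu) + rng.standard_normal((n, p)) / np.sqrt(p)
y = s + 0.1 * rng.standard_normal(n)   # labels to be memorized in context

def memorization_error(F, y, lam=1e-3):
    """Training MSE of a ridge-regularized least-squares fit of y on features F."""
    w = np.linalg.solve(F.T @ F + lam * np.eye(F.shape[1]), F.T @ y)
    return np.mean((F @ w - y) ** 2)

# (i) Linear regression directly on the raw tokens (p parameters).
err_linear = memorization_error(X, y)

# (ii) Nonlinear attention features: entrywise nonlinearity of the random
# similarity matrix X W X^T / sqrt(p) applied to X, then a p-dimensional readout.
W = rng.standard_normal((p, p)) / np.sqrt(p)   # random query-key weights
A = np.tanh(X @ W @ X.T / np.sqrt(p))          # illustrative nonlinearity
err_attention = memorization_error(A @ X, y)

print(f"linear regression memorization error:   {err_linear:.4f}")
print(f"nonlinear attention memorization error: {err_attention:.4f}")
```

Both feature maps expose the same number of trainable parameters ($p$), so any gap in the printed errors reflects the nonlinear attention transform rather than model capacity; varying `signal_strength` or aligning `W` with `mu` is a simple way to probe the structured-input regime discussed in the abstract.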
Primary Area: learning theory
Submission Number: 14573