The emergence of clusters in self-attention dynamics

Published: 21 Sept 2023 · Last Modified: 08 Jan 2024 · NeurIPS 2023 poster
Keywords: Transformers, Self-Attention, Clustering, Interacting Particle Systems, Continuous Time
TL;DR: We mathematically prove that tokens in trained Transformers cluster in the long-time limit.
Abstract: Viewing Transformers as interacting particle systems, we describe the geometry of learned representations when the weights are not time-dependent. We show that particles, representing tokens, tend to cluster toward particular limiting objects as time tends to infinity. Using techniques from dynamical systems and partial differential equations, we show that the type of limiting object that emerges depends on the spectrum of the value matrix. Additionally, in the one-dimensional case we prove that the self-attention matrix converges to a low-rank Boolean matrix. Together, these results mathematically confirm the empirical observation made by Vaswani et al. [VSP+17] that leaders appear in a sequence of tokens when processed by Transformers.
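The clustering phenomenon described in the abstract can be sketched numerically. The snippet below is an illustrative toy simulation, not the paper's exact model: it assumes identity query and key matrices, an identity value matrix (so all eigenvalues of the value matrix are positive), a hypothetical inverse-temperature parameter `beta`, and a projection of tokens back to the unit sphere after each Euler step. Under these assumptions, the tokens' maximum pairwise distance shrinks over time, i.e. the particles cluster.

```python
import numpy as np

def attention_step(X, V, beta=1.0, dt=0.1):
    """One Euler step of a continuous-time self-attention dynamic.

    Each token x_i moves toward sum_j softmax_j(beta * <x_i, x_j>) V x_j.
    Identity query/key matrices are assumed for simplicity.
    """
    logits = beta * (X @ X.T)
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    A = np.exp(logits)
    A /= A.sum(axis=1, keepdims=True)              # row-stochastic self-attention matrix
    return X + dt * (A @ X @ V.T)

def spread(X):
    # largest pairwise distance between tokens
    return np.max(np.linalg.norm(X[:, None] - X[None, :], axis=-1))

rng = np.random.default_rng(0)
n, d = 16, 2
X = rng.standard_normal((n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)      # tokens start on the unit circle

V = np.eye(d)                                      # identity value matrix (toy choice)
initial_spread = spread(X)
for _ in range(200):
    X = attention_step(X, V)
    X /= np.linalg.norm(X, axis=1, keepdims=True)  # project back onto the sphere
final_spread = spread(X)

print(initial_spread, final_spread)
```

Varying the spectrum of `V` (e.g. making some eigenvalues negative) changes the limiting geometry, which is the dependence the abstract refers to; this sketch only exhibits the simplest, fully clustering case.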
Supplementary Material: pdf
Submission Number: 10803