Transformers as Unrolled Inference in Probabilistic Laplacian Eigenmaps

Published: 23 Sept 2025, Last Modified: 21 Oct 2025, NPGML Poster, CC BY-SA 4.0
Keywords: transformers, dimensionality reduction, variational inference, probabilistic interpretation, laplacian eigenmaps, graph
TL;DR: Transformers approximately perform unrolled inference in probabilistic Laplacian Eigenmaps. This suggests that a graph diffusion term is a more natural part of the architecture than the attention matrix alone.
Abstract: We propose a probabilistic interpretation of transformers as unrolled inference steps under a probabilistic Laplacian Eigenmaps model from the ProbDR framework. Our derivation shows that, at initialisation, transformers perform "linear" dimensionality reduction. We also show that, within the transformer block, our arguments give rise to a graph Laplacian term rather than an attention matrix (which we interpret as an adjacency matrix). We demonstrate that simply subtracting the identity from the attention matrix (and thereby taking a graph diffusion step) improves validation performance on a language model and a simple vision transformer.
Submission Number: 4
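The modification described in the abstract, subtracting the identity from the row-stochastic attention matrix A so that the residual update becomes x + (A - I)Vx, i.e. a diffusion step with the random-walk graph Laplacian L = I - A, can be sketched as below. This is a hypothetical single-head PyTorch illustration under those assumptions, not the authors' implementation; module and parameter names are made up for exposition.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DiffusionAttention(nn.Module):
    """Single-head self-attention with the identity subtracted from the
    softmax attention matrix, so the residual update x + (A - I) V x
    acts like a graph-diffusion step x - L V x with L = I - A.
    Hypothetical sketch, not the paper's reference code."""

    def __init__(self, dim: int):
        super().__init__()
        self.q = nn.Linear(dim, dim, bias=False)
        self.k = nn.Linear(dim, dim, bias=False)
        self.v = nn.Linear(dim, dim, bias=False)
        self.scale = dim ** -0.5

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, dim)
        q, k, v = self.q(x), self.k(x), self.v(x)
        # Row-stochastic attention matrix, interpreted as a graph adjacency.
        attn = F.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)
        eye = torch.eye(x.size(1), device=x.device, dtype=x.dtype)
        # Residual update with (A - I): a graph-diffusion step instead of plain attention.
        return x + (attn - eye) @ v
```

Dropped into a standard transformer block in place of vanilla attention, this reproduces the A - I change that the abstract reports improves validation performance on a language model and a simple vision transformer.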