Keywords: Diffusion Maps, Self-Attention, Magnetic Laplacian, Manifold Learning, Kernel Methods, Random-Walk
Abstract: Transformers, diffusion maps, and magnetic Laplacians are usually treated as separate tools; we show they are all different regimes of a single Markov geometry built from pre-softmax query–key scores. We define a QK "bidivergence" whose exponentiated and normalized forms yield attention, diffusion maps, and magnetic diffusion. We then use product-of-experts constructions and Schrödinger bridges to connect and organize them into equilibrium, non-equilibrium steady-state, and driven dynamics.
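A minimal sketch (not the paper's code, and with all variable names and the phase parameter assumed for illustration) of the connection the abstract claims: the same pre-softmax query–key score matrix, exponentiated and normalized in different ways, gives softmax attention, a diffusion-map style random-walk operator, and a magnetic (phase-augmented) diffusion operator.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 6, 4
Q = rng.normal(size=(n, d))
K = rng.normal(size=(n, d))

# Pre-softmax query-key scores; the paper's "bidivergence" generalizes this,
# here we simply use the standard scaled dot product.
S = Q @ K.T / np.sqrt(d)

# (i) Attention: exponentiate and row-normalize -> a row-stochastic Markov matrix.
A = np.exp(S)
attention = A / A.sum(axis=1, keepdims=True)

# (ii) Diffusion-map style operator: symmetrize the scores into a kernel, then
# apply the random-walk normalization D^{-1} W used in diffusion maps.
W = np.exp((S + S.T) / 2.0)
P = W / W.sum(axis=1, keepdims=True)

# (iii) Magnetic diffusion: attach a complex phase built from the antisymmetric
# (directional) part of the scores, as in magnetic Laplacian constructions.
g = 0.5                                    # phase strength (assumed parameter)
phase = np.exp(1j * g * (S - S.T) / 2.0)
W_mag = W * phase                          # Hermitian kernel: W_mag == W_mag.conj().T
P_mag = W_mag / np.abs(W_mag).sum(axis=1, keepdims=True)

print(np.allclose(attention.sum(axis=1), 1.0))   # attention rows sum to 1
print(np.allclose(W_mag, W_mag.conj().T))        # magnetic kernel is Hermitian
```

The symmetric part of the scores plays the role of the kernel in diffusion maps, while the antisymmetric part supplies the phase of the magnetic operator; how these regimes are unified and organized is the subject of the paper itself.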
Primary Area: learning on graphs and other geometries & topologies
Submission Number: 22024