Detecting Learning Dynamics in Graph Transformers via Spectral Deviations

Published: 23 Oct 2025, Last Modified: 02 Nov 2025 · LOG 2025 Poster · CC BY 4.0
Keywords: random matrix theory, graph transformers, spectral analysis
Abstract: Spectral anomalies in graph shift operators may expose meaningful deviations from expected behavior, offering insight into learned structure, overfitting, or instability in graph transformers. We leverage tools from random matrix theory to identify statistically significant deviations in the spectral distributions of graph transformer architectures, using GraphGPS as a case study. Matrices extracted at various stages of training, such as attention maps, layer outputs, and learned weights, are analyzed to assess whether statistically significant spectral deviations correspond to high-information components or key learning dynamics. We report on preliminary work applying random matrix theory in this domain, which reveals distinct spectral signatures across different phases of model learning and highlights open challenges in extending random matrix theory frameworks to the inherently non-symmetric, sparse matrices found in graph transformers.
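The abstract does not specify the statistical test used, but a standard random-matrix baseline for this kind of analysis is the Marchenko–Pastur law: eigenvalues of a pure-noise weight matrix's empirical covariance fall inside a predictable bulk, so eigenvalues escaping the upper bulk edge can be flagged as candidate "high-information" components. The sketch below illustrates that idea on synthetic data; the function names (`mp_edges`, `spectral_outliers`), the 5% tolerance, and the rank-one spike construction are illustrative assumptions, not the paper's method, and the i.i.d.-entry assumption behind Marchenko–Pastur is exactly what breaks down for the sparse, non-symmetric matrices the abstract mentions.

```python
import numpy as np

def mp_edges(gamma, sigma2=1.0):
    """Bulk edges of the Marchenko-Pastur law for aspect ratio gamma = m/n."""
    sq = np.sqrt(gamma)
    return sigma2 * (1 - sq) ** 2, sigma2 * (1 + sq) ** 2

def spectral_outliers(W, sigma2=1.0, tol=1.05):
    """Eigenvalues of (1/n) W^T W above the MP upper edge (with a small
    tolerance for finite-size fluctuations of the largest eigenvalue)."""
    n, m = W.shape
    _, lam_plus = mp_edges(m / n, sigma2)
    evals = np.linalg.eigvalsh(W.T @ W / n)
    return evals[evals > tol * lam_plus]

rng = np.random.default_rng(0)
n, m = 500, 250

# Pure-noise matrix: its spectrum should stay inside the MP bulk.
G = rng.standard_normal((n, m))

# Add a strong rank-one "signal" spike: it should escape the bulk.
u = rng.standard_normal(n); u /= np.linalg.norm(u)
v = rng.standard_normal(m); v /= np.linalg.norm(v)
W = G + 5.0 * np.sqrt(n) * np.outer(u, v)

print("noise outliers:", len(spectral_outliers(G)))
print("spiked outliers:", len(spectral_outliers(W)))
```

For pure noise the outlier count should be zero (or near-zero), while the spiked matrix yields at least one eigenvalue well above the bulk edge; applied to trained attention or weight matrices instead of synthetic ones, the same comparison is the kind of deviation test the abstract describes.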
Submission Type: Extended abstract (max 4 main pages).
Software: https://github.com/sydneyid/spectral_assess
Submission Number: 120