Spectral Edge Encoding (SEE): Does Structural Information Really Enhance Graph Transformer Performance?
Abstract: We propose Spectral Edge Encoding (SEE), a parameter-free framework that quantifies each edge's contribution to the global structure by measuring spectral shifts in the Laplacian eigenvalues. SEE captures the low-frequency sensitivity of edges and integrates these scores into graph Transformer attention logits as a structure-aware bias. When applied to the Moiré Graph Transformer (MoiréGT) and evaluated on seven MoleculeNet classification benchmarks, SEE consistently improves ROC-AUC performance. In particular, MoiréGT+SEE achieves an average ROC-AUC of 85.3%, approximately 7.1 percentage points higher than the previous state-of-the-art model UniCorn (78.2%). Moreover, SEE preserves molecular topology and enables edge-level interpretability, offering a practical alternative to sequence-based chemical language models. These results demonstrate that spectrum-informed attention can simultaneously enhance performance and transparency in graph-based molecular modeling.
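The abstract describes SEE as scoring each edge by the shift it induces in the low-frequency Laplacian eigenvalues, then adding those scores to the Transformer attention logits as a bias. The paper's exact formulation is not given here, so the following is a minimal sketch under stated assumptions: the score is taken as the absolute change in the `k` smallest Laplacian eigenvalues when the edge is removed, and the bias is applied additively and symmetrically. The function names (`spectral_edge_scores`, `biased_attention`) and the parameter `k` are illustrative, not from the paper.

```python
import numpy as np

def laplacian_eigvals(adj):
    # Eigenvalues of the combinatorial Laplacian L = D - A, ascending.
    deg = np.diag(adj.sum(axis=1))
    return np.linalg.eigvalsh(deg - adj)

def spectral_edge_scores(adj, k=3):
    # Hypothetical per-edge score: total shift in the k smallest
    # Laplacian eigenvalues when that edge is deleted (assumption;
    # the paper's precise score may differ).
    base = laplacian_eigvals(adj)[:k]
    n = adj.shape[0]
    scores = {}
    for i in range(n):
        for j in range(i + 1, n):
            if adj[i, j]:
                pert = adj.copy()
                pert[i, j] = pert[j, i] = 0.0
                scores[(i, j)] = float(np.abs(laplacian_eigvals(pert)[:k] - base).sum())
    return scores

def biased_attention(logits, scores, n):
    # Inject the parameter-free edge scores as an additive,
    # symmetric bias on the attention logits.
    bias = np.zeros((n, n))
    for (i, j), s in scores.items():
        bias[i, j] = bias[j, i] = s
    return logits + bias
```

Because removing an edge can only loosen the graph's connectivity, low-frequency eigenvalues shift most for structurally critical edges (e.g. bridges), which is what makes the score usable for edge-level interpretability.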
DOI: 10.1145/3746252.3760906