Keywords: Laplacian Positional Encoding; Node Identifiability; Graph Neural Processes
Abstract: Message-passing GNNs are limited by the 1-WL test and can collapse distinct nodes on symmetric graphs; in Graph Neural Processes this leads to intrinsic posterior ambiguity and a non-vanishing Bayes-risk floor for node localization. We prove that Laplacian spectral positional information breaks this identifiability barrier, establishing a sample-complexity separation on random $r$-regular graphs: constant-shot identifiability is achievable with spectral coordinates, while WL-bounded GNPs fail in the sublogarithmic regime. The proof links shortest-path observations to diffusion geometry in a logarithmic tree-like window, applies constant-anchor trilateration, and uses quantitative spectral injectivity with logarithmic-size coordinates. Empirically, we adopt the practical choice of concatenating a few raw Laplacian eigenvectors to node features and observe improved accuracy and faster optimization on drug-drug interaction prediction.
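The "practical choice" mentioned in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: it computes the eigenvectors of the combinatorial Laplacian $L = D - A$ with the smallest nonzero eigenvalues and concatenates them to node features; the function name, the toy 4-cycle graph, and the feature dimensions are all hypothetical.

```python
import numpy as np

def laplacian_pe(adj, k):
    """Return the k eigenvectors of the graph Laplacian with the
    smallest nonzero eigenvalues (Laplacian positional encoding)."""
    deg = adj.sum(axis=1)
    lap = np.diag(deg) - adj            # combinatorial Laplacian L = D - A
    evals, evecs = np.linalg.eigh(lap)  # eigenvalues in ascending order
    # skip the trivial constant eigenvector (eigenvalue 0)
    return evecs[:, 1:k + 1]

# toy example: a 4-cycle graph
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]], dtype=float)
pe = laplacian_pe(adj, 2)               # shape (4, 2)
features = np.random.randn(4, 8)        # hypothetical node features
augmented = np.concatenate([features, pe], axis=1)  # shape (4, 10)
```

Note that Laplacian eigenvectors are defined only up to sign (and up to basis rotation within repeated eigenvalues), which is one reason using a few "raw" eigenvectors is described as a practical choice rather than a canonical encoding.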
Paper Type: Long
Research Area: Mathematical, Symbolic, Neurosymbolic, and Logical Reasoning
Research Area Keywords: Machine Learning for NLP, Interpretability and Analysis of Models for NLP
Contribution Types: Model analysis & interpretability, Approaches low compute settings-efficiency
Languages Studied: Not applicable
Submission Number: 6129