Range-aware Positional Encoding via High-order Pretraining: Theory and Practice

Published: 23 Oct 2024, Last Modified: 24 Feb 2025 · NeurReps 2024 Poster · CC BY 4.0
Keywords: Graph Neural Networks, Equivariant Autoencoder, Positional Encoding
TL;DR: We develop a novel pre-training strategy for positional encoding on graphs.
Abstract: Building on the Wavelet Positional Encoding of Ngo et al., we propose $\textbf{HOPE-WavePE}$ ($\textbf{H}$igh-$\textbf{O}$rder $\textbf{P}$ermutation $\textbf{E}$quivariant $\textbf{Wave}$let $\textbf{P}$ositional $\textbf{E}$ncoding), a novel pre-training strategy for positional encoding that is equivariant under the permutation group and is sensitive to the length and diameter of graphs in downstream tasks. Since our approach relies solely on the graph structure, it is domain-agnostic and adaptable to datasets from various domains, thereby paving the way for general graph structure encoders and graph foundation models. We theoretically demonstrate that this equivariant pretraining scheme can approximate the training target to arbitrarily small tolerance. We also evaluate HOPE-WavePE on graph-level prediction tasks from different areas and show its superiority over other methods. We will release our source code upon acceptance.
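To make the wavelet-based, structure-only encoding concrete, below is a minimal sketch of a heat-kernel wavelet positional encoding computed from the normalized graph Laplacian. This is an illustrative assumption, not the authors' implementation: the specific scales, the high-order tensor construction, and the equivariant autoencoder pretraining objective of HOPE-WavePE are not reproduced here.

```python
# Minimal sketch (assumed, not the paper's code): wavelet-style positional
# encodings exp(-s * L) derived purely from graph structure.
import numpy as np

def wavelet_positional_encoding(adj: np.ndarray, scales=(1.0, 2.0, 4.0)) -> np.ndarray:
    """Return an (n, n, len(scales)) tensor of heat-kernel wavelets exp(-s * L)."""
    n = adj.shape[0]
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.where(deg > 0, deg ** -0.5, 0.0)
    # Symmetric normalized Laplacian L = I - D^{-1/2} A D^{-1/2}
    lap = np.eye(n) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    # Spectral decomposition; each wavelet is a low-pass filter of the eigenvalues
    eigvals, eigvecs = np.linalg.eigh(lap)
    wavelets = np.stack(
        [eigvecs @ np.diag(np.exp(-s * eigvals)) @ eigvecs.T for s in scales],
        axis=-1,
    )
    # Permutation-equivariant: relabeling the nodes permutes rows and columns alike
    return wavelets

# Example: 4-cycle graph
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
W = wavelet_positional_encoding(A)
print(W.shape)  # (4, 4, 3)
```

Because the encoding depends only on the adjacency matrix, the same pretraining pipeline can in principle be reused across datasets from different domains, which is the property the abstract highlights.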
Submission Number: 72