Sign and Basis Invariant Networks for Spectral Graph Representation Learning

Published: 25 Mar 2022 · Last Modified: 20 Oct 2024 · GTRL 2022 Poster
Keywords: Invariance, equivariance, graph neural networks, spectral graph representation learning
TL;DR: We propose neural networks invariant to the symmetries of eigenvectors; they are provably expressive and empirically effective at learning graph positional encodings.
Abstract: Many machine learning tasks involve processing eigenvectors derived from data. Especially valuable are Laplacian eigenvectors, which capture useful structural information about graphs and other geometric objects. However, ambiguities arise when computing eigenvectors: for each eigenvector v, the sign-flipped -v is also an eigenvector. More generally, higher-dimensional eigenspaces contain infinitely many choices of eigenvector bases. In this work we introduce SignNet and BasisNet --- new neural architectures that are invariant to all requisite symmetries and hence process collections of eigenspaces in a principled manner. Our networks are universal, i.e., they can approximate any continuous function of eigenvectors with the proper invariances. They are also theoretically strong for graph representation learning --- they can provably approximate any spectral graph convolution, spectral invariants that go beyond message passing neural networks, and other graph positional encodings. Experiments show the strength of our networks for learning spectral graph filters and learning graph positional encodings.
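To make the sign-invariance idea in the abstract concrete, below is a minimal PyTorch sketch: a shared network phi is applied to each eigenvector v and to -v, and the two outputs are summed, so the result cannot change when any eigenvector's sign flips; a second network rho then combines the per-eigenvector features. The layer sizes, MLP choices, and graph-level readout here are illustrative assumptions, not the authors' exact SignNet architecture (which, per the paper, also handles node-level positional encodings and basis symmetries of higher-dimensional eigenspaces).

```python
# Minimal sketch of a sign-invariant network over Laplacian eigenvectors.
# Invariance comes from the symmetrization phi(v) + phi(-v).
import torch
import torch.nn as nn

class SignInvariantNet(nn.Module):
    def __init__(self, n_nodes: int, k: int, hidden: int = 64, out: int = 32):
        super().__init__()
        # phi processes one eigenvector (a length-n_nodes vector) at a time
        self.phi = nn.Sequential(
            nn.Linear(n_nodes, hidden), nn.ReLU(), nn.Linear(hidden, hidden)
        )
        # rho combines the k sign-invariant eigenvector embeddings
        self.rho = nn.Sequential(
            nn.Linear(k * hidden, hidden), nn.ReLU(), nn.Linear(hidden, out)
        )

    def forward(self, eigvecs: torch.Tensor) -> torch.Tensor:
        # eigvecs: (n_nodes, k) matrix whose columns are eigenvectors
        vs = eigvecs.T                      # (k, n_nodes), one row per eigenvector
        z = self.phi(vs) + self.phi(-vs)    # unchanged if any v_i is replaced by -v_i
        return self.rho(z.reshape(-1))      # illustrative graph-level embedding

# Usage sketch (hypothetical Laplacian L):
#   evals, eigvecs = torch.linalg.eigh(L)
#   net = SignInvariantNet(n_nodes=L.shape[0], k=4)
#   out = net(eigvecs[:, :4])
#   Flipping the sign of any column of eigvecs leaves `out` unchanged.
```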
Community Implementations: [3 code implementations](https://www.catalyzex.com/paper/sign-and-basis-invariant-networks-for/code)