Representation Invariance of GNNs: Going Beyond Isomorphism

NeurIPS 2025 Workshop NeurReps Submission 97 Authors

30 Aug 2025 (modified: 29 Oct 2025) · Submitted to NeurReps 2025 · CC BY 4.0
Keywords: Representation invariance, Equivariance, Information-preserving transformations, Graph neural networks
TL;DR: Non-isomorphic graphs may represent the same information. We show that current GNNs are not invariant to such transformations, and we plan to develop GNN architectures that are invariant to information-preserving transformations.
Abstract: Graph Neural Networks (GNNs) leverage the topology of graphs to learn informative representations. Since isomorphic graphs represent the same information, there has been a significant effort to develop GNNs that return the same results over isomorphic graphs. It is known in the data management community that the same data can be represented in distinct non-isomorphic forms, e.g., through normalization and denormalization of relational or graph data. In this paper, we postulate that GNNs should be invariant to these types of variations in data representation. We formalize this notion of invariance using concepts from schema management, e.g., mappings and constraints. We report preliminary results indicating that GNNs are often not robust under these variations.
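
To make the distinction concrete, here is a minimal PyTorch sketch (our illustration, not the paper's experimental setup) of the two kinds of variation: an isomorphic copy obtained by permuting node labels, versus a non-isomorphic but information-preserving copy in which each edge is reified as an extra node, a hypothetical stand-in for the normalization-style transformations the abstract alludes to. The toy GNN `gnn_readout` and the reification construction are our own assumed names, chosen only to show why isomorphism invariance holds by construction while representation invariance generally fails.

```python
import torch

torch.manual_seed(0)

def gnn_readout(A, X, W):
    # One round of mean-aggregation message passing followed by a
    # mean-pooled graph readout: mean over nodes of ReLU(D^-1 A X W).
    deg = A.sum(dim=1, keepdim=True).clamp(min=1)
    H = torch.relu((A @ X) / deg @ W)
    return H.mean(dim=0)

# A 3-node path graph with identity node features.
A = torch.tensor([[0., 1., 0.],
                  [1., 0., 1.],
                  [0., 1., 0.]])
X = torch.eye(3)
W = torch.randn(3, 2)

# Isomorphic copy: relabel nodes with a permutation matrix P.
# Then A' = P A P^T, X' = P X, so hidden states are permuted rows
# and the mean readout is unchanged.
P = torch.tensor([[0., 0., 1.],
                  [1., 0., 0.],
                  [0., 1., 0.]])
A_iso, X_iso = P @ A @ P.T, P @ X

# Non-isomorphic, information-preserving copy: reify each edge as a
# new node with zero-padded features (edges (0,1) and (1,2) become
# nodes 3 and 4). The original graph is fully recoverable, yet the
# topology the GNN sees is different.
A_reif = torch.zeros(5, 5)
for e, (u, v) in enumerate([(0, 1), (1, 2)]):
    A_reif[u, 3 + e] = A_reif[3 + e, u] = 1.
    A_reif[v, 3 + e] = A_reif[3 + e, v] = 1.
X_reif = torch.cat([X, torch.zeros(2, 3)])

print(gnn_readout(A, X, W))            # reference embedding
print(gnn_readout(A_iso, X_iso, W))    # identical: invariant to isomorphism
print(gnn_readout(A_reif, X_reif, W))  # differs: not invariant to reification
```

The first two readouts agree exactly because permutation equivariance of message passing composes with a permutation-invariant pooling; the third differs even though no information was lost, which is the gap the paper targets.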
Submission Number: 97