Keywords: weight sharing, graph neural networks, message passing, graph invariants
TL;DR: We introduce weight sharing for irregular data and provide an instantiation, ShareGNN, which we apply to graph classification and regression.
Abstract: Weight sharing is a key principle of machine learning.
While well established for regular domains such as images, extending weight sharing to graphs remains challenging due to their inherent irregularity.
We address this gap with a novel weight-sharing paradigm that indexes weights directly by graph invariants, i.e., functions of a graph whose values are unchanged under node permutations.
This formulation enables systematic reuse of parameters across structurally equivalent subgraphs, providing a principled mechanism for permutation-aware learning.
To demonstrate the practicality of the approach, we introduce ShareGNNs, a new family of permutation-invariant graph neural networks that instantiate
invariant-based weight sharing in a simple encoder-decoder design.
We prove that the expressivity of ShareGNNs is lower-bounded by the discriminative power of the chosen invariant, so model complexity can be tuned directly through the choice of invariant.
Experiments on subgraph counting, synthetic, and real-world benchmarks show that ShareGNNs achieve competitive performance on graph-level classification and regression tasks while using only one message-passing layer.
Moreover, we discuss how the approach enhances interpretability and transferability.
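To make the indexing idea concrete, the sketch below shows one way invariant-based weight sharing could look in a single message-passing layer: nodes that realize the same invariant value share one weight matrix, across and within graphs. The choice of invariant (one round of Weisfeiler-Leman refinement over node degrees), the names `wl_colors` and `InvariantSharedLayer`, and the PyTorch realization are illustrative assumptions, not the paper's actual ShareGNN architecture.

```python
# Minimal sketch of invariant-indexed weight sharing (assumed design, not the
# paper's implementation). The invariant here is a 1-WL color per node.
import torch
import torch.nn as nn

def wl_colors(adj: torch.Tensor, rounds: int = 1) -> torch.Tensor:
    """Weisfeiler-Leman refinement starting from node degrees."""
    colors = adj.sum(dim=1).long()  # initial invariant: node degree
    for _ in range(rounds):
        # signature = (own color, sorted multiset of neighbor colors)
        signatures = [
            (int(colors[v]), tuple(sorted(colors[adj[v] > 0].tolist())))
            for v in range(adj.size(0))
        ]
        lookup = {s: i for i, s in enumerate(sorted(set(signatures)))}
        colors = torch.tensor([lookup[s] for s in signatures])
    return colors

class InvariantSharedLayer(nn.Module):
    """One message-passing layer whose weights are indexed by the invariant."""
    def __init__(self, num_colors: int, dim: int):
        super().__init__()
        # one weight matrix per invariant value, reused by every node
        # (in any graph) that realizes that value
        self.weights = nn.Parameter(torch.randn(num_colors, dim, dim) * 0.1)

    def forward(self, x: torch.Tensor, adj: torch.Tensor,
                colors: torch.Tensor) -> torch.Tensor:
        messages = adj @ x                # aggregate neighbor features
        w = self.weights[colors]          # (n, dim, dim), shared by color
        return torch.relu(torch.einsum('nij,nj->ni', w, messages))
```

Under this reading, the invariant's discriminative power directly bounds the layer's capacity: a finer invariant induces more distinct shared weight matrices, which matches the expressivity lower bound stated above.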
Primary Area: learning on graphs and other geometries & topologies
Submission Number: 10772