Keywords: Graph Neural Network, Expressiveness, Node-specific parameterization
TL;DR: From Embeddings to Functions: Node-Specific Parameterization for GNNs
Abstract: Graph Neural Networks (GNNs) have emerged as powerful tools for graph learning. Classical message-passing GNNs enforce permutation equivariance at the node level and permutation invariance at the graph level, but these symmetries constrain expressiveness, limiting them to the discriminative power of the 1-WL test. Recent advances such as Graph Transformers extend GNNs with global attention and positional encodings, yet still rely on shared graph-level parameters. In this work, we revisit the symmetry–expressiveness trade-off through node-specific parameterization, where each node carries a small trainable neural network, an approach we term Node2Net. Unlike existing methods that represent each node with a static embedding vector, Node2Net represents each node with a parametric function capable of modeling nonlinear feature interactions and adaptive transformations. Node2Net breaks 1-WL indistinguishability and can act as a universal approximator capable of representing arbitrarily complex node-level transformations. Its computational and memory costs scale linearly with the number of nodes and remain practical on standard benchmarks. As a fundamental node representation method, Node2Net is model- and task-agnostic and does not change the transductive or inductive generalization properties of GNN backbones. Extensive experiments on multiple benchmarks demonstrate that Node2Net consistently improves over node feature learning methods, traditional message-passing GNNs, and recent Graph Transformers.
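The core idea of node-specific parameterization can be illustrated with a minimal sketch. The snippet below is an illustrative assumption, not the paper's actual implementation: each node gets its own small weight matrix and bias (a one-layer per-node function) instead of a shared transform, so even nodes with identical input features can map to different representations, which is what lets the approach break 1-WL indistinguishability. Parameter count grows linearly in the number of nodes, matching the stated linear memory cost.

```python
import numpy as np

# Hypothetical sketch of node-specific parameterization.
# All names and shapes here are illustrative assumptions.
rng = np.random.default_rng(0)
num_nodes, d_in, d_out = 4, 3, 2

x = rng.normal(size=(num_nodes, d_in))         # node input features
W = rng.normal(size=(num_nodes, d_in, d_out))  # one weight matrix per node
b = np.zeros((num_nodes, d_out))               # one bias vector per node

# Per-node transform: h_i = relu(x_i @ W_i + b_i)
# (contrast with a shared-parameter GNN layer, which would use one W for all i)
h = np.maximum(np.einsum("ni,nio->no", x, W) + b, 0.0)

# Even if two nodes have identical features, their outputs differ
# because their parameters differ:
x_same = np.tile(x[0], (num_nodes, 1))
h_same = np.maximum(np.einsum("ni,nio->no", x_same, W) + b, 0.0)
print(h.shape)  # one d_out-dimensional representation per node
```

Total parameter memory is `num_nodes * (d_in * d_out + d_out)`, i.e., linear in the number of nodes, consistent with the scaling claim in the abstract.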
Primary Area: learning on graphs and other geometries & topologies
Submission Number: 4913