Scale-aware Message Passing for Graph Node Classification

05 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: Directed Graph, Message Passing Neural Network, Multi-Scale, Scalability
TL;DR: Multi-scale learning on graphs
Abstract: Most Graph Neural Networks (GNNs) operate at the first-order scale, even though multi-scale representations are known to be crucial in domains such as image classification. In this work, we investigate whether GNNs can similarly benefit from multi-scale learning, rather than being limited to a fixed depth of $k$-hop aggregation. We begin by formalizing scale invariance in graph learning, providing theoretical guarantees and empirical evidence for its effectiveness. Building on this principle, we introduce ScaleNet, a scale-aware message-passing architecture that combines directed multi-scale feature aggregation with an adaptive self-loop mechanism. ScaleNet achieves state-of-the-art performance on six benchmark datasets, covering both homophilic and heterophilic graphs. To address scalability, we further propose LargeScaleNet, which extends multi-scale learning to large graphs and sets new state-of-the-art results on three large-scale benchmarks. We also reinterpret spectral GNNs from a message-passing perspective, showing the equivalence between Hermitian Laplacian-based models and GraphSAGE with incidence normalization, and revealing that FaberNet's strength largely arises from multi-scale feature integration. Together, these results suggest that scale invariance may serve as a valuable principle for improving the performance of single-order GNNs. Code is available at \url{https://anonymous.4open.science/r/ScaleNet-2025/}.
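The abstract describes aggregating node features over multiple hop scales of a directed graph and combining them with a self-loop term. As a rough, hypothetical illustration of that idea (not the paper's actual ScaleNet implementation), the sketch below row-normalizes a directed adjacency matrix, collects features at several powers of it, and concatenates the resulting blocks; the function name, scale set, and fixed `self_loop_weight` are all assumptions for the example:

```python
import numpy as np

def multi_scale_aggregate(A, X, scales=(1, 2), self_loop_weight=1.0):
    """Hypothetical sketch of multi-scale message passing:
    concatenate k-hop aggregated features for each k in `scales`,
    plus a weighted self-loop (identity) term."""
    # Row-normalize the directed adjacency so each hop averages
    # over out-neighbors; rows with no out-edges stay zero.
    deg = A.sum(axis=1, keepdims=True)
    A_norm = np.divide(A, deg, out=np.zeros_like(A, dtype=float), where=deg > 0)

    feats = [self_loop_weight * X]  # self-loop term (assumed fixed weight here;
                                    # the paper's mechanism is adaptive)
    H = X
    for k in range(1, max(scales) + 1):
        H = A_norm @ H              # one more hop of aggregation
        if k in scales:
            feats.append(H)         # keep this scale's features
    return np.concatenate(feats, axis=1)  # one feature block per scale

# Tiny directed path graph 0 -> 1 -> 2, with one-hot node features.
A = np.array([[0, 1, 0],
              [0, 0, 1],
              [0, 0, 0]], dtype=float)
X = np.eye(3)
Z = multi_scale_aggregate(A, X, scales=(1, 2))
print(Z.shape)  # (3, 9): self-loop block + 1-hop block + 2-hop block
```

In a full model, each scale's block would typically pass through its own learnable transformation before being combined, so the network can weight scales per task.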
Primary Area: learning on graphs and other geometries & topologies
Submission Number: 2402