GAMMA: Gated Multi-hop Message Passing for Homophily-Agnostic Node Representation in GNNs

Published: 18 Sept 2025 · Last Modified: 29 Oct 2025 · NeurIPS 2025 poster · CC BY 4.0
Keywords: GNNs, Semi-supervised Learning, Message Passing, Heterophilic GNNs
TL;DR: A Multi-hop Message Passing method for Homophily-Agnostic Node Representation in GNNs.
Abstract: The success of Graph Neural Networks (GNNs) leverages the homophily principle, where connected nodes share similar features and labels. However, this assumption breaks down in heterophilic graphs, where same-class nodes are often distributed across distant neighborhoods rather than immediate connections. Recent methods expand the receptive field through multi-hop aggregation schemes that explicitly preserve intermediate representations from each hop distance. While effective at capturing heterophilic patterns, these methods require separate weight matrices per hop and feature concatenation, causing parameter counts to scale linearly with hop count. This leads to high computational complexity and GPU memory consumption. We propose Gated Multi-hop Message Passing (GAMMA), where nodes assess the relevance of information aggregated from their k-hop neighbors. This assessment occurs through multiple refinement steps in which a node compares each hop's embedding with its current representation, allowing it to focus on the most informative hops. During the forward pass, GAMMA finds the optimal mix of multi-hop information local to each node using a single feature vector, without needing separate representations for each hop, thereby maintaining dimensionality comparable to single-hop GNNs. In addition, we propose a weight-sharing scheme that applies a unified transformation to features aggregated from multiple hops, so that global heterophilic patterns specific to each hop are learned during training. As such, GAMMA captures both global (per-hop) and local (per-node) heterophily patterns without high computation and memory overhead. Experiments show GAMMA matches or exceeds state-of-the-art heterophilic GNN accuracy, achieving up to $\approx20\times$ faster inference. Our code is publicly available at \url{https://github.com/amir-ghz/GAMMA}.
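The abstract describes three ingredients: a shared transformation applied to every hop's aggregate, a per-node gate that compares each hop's message with the node's current state, and iterative refinement of a single feature vector rather than per-hop concatenation. The sketch below illustrates that combination in PyTorch; it is a minimal interpretation assuming a dense row-normalized adjacency, a sigmoid gate, and averaged gated messages, not the authors' actual implementation (see the linked repository for that). The class name `GatedMultiHopLayer` and all hyperparameters are illustrative.

```python
import torch
import torch.nn as nn

class GatedMultiHopLayer(nn.Module):
    """Hypothetical sketch of gated multi-hop message passing in the spirit
    of GAMMA: one transform shared across all hops (global per-hop patterns),
    a per-node gate weighing each hop's message against the node's current
    state (local per-node patterns), and a single feature vector maintained
    throughout, so dimensionality stays comparable to a single-hop GNN."""

    def __init__(self, dim: int, num_hops: int = 3, refine_steps: int = 2):
        super().__init__()
        self.num_hops = num_hops
        self.refine_steps = refine_steps
        self.shared = nn.Linear(dim, dim)   # unified transform shared by all hops
        self.gate = nn.Linear(2 * dim, 1)   # scores a hop message vs. current state

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # Precompute k-hop aggregates, reusing the single shared transform.
        hop_msgs, h = [], x
        for _ in range(self.num_hops):
            h = adj @ h                     # assumes row-normalized adjacency
            hop_msgs.append(self.shared(h))

        # Refine one representation per node: at each step, gate every hop's
        # message against the current state and fold the gated mix back in.
        out = x
        for _ in range(self.refine_steps):
            mixed = torch.zeros_like(out)
            for msg in hop_msgs:
                g = torch.sigmoid(self.gate(torch.cat([out, msg], dim=-1)))
                mixed = mixed + g * msg     # per-node, per-hop gating
            out = out + mixed / self.num_hops
        return out
```

Note that the output keeps the input dimensionality `dim` regardless of `num_hops`, which is the memory advantage the abstract claims over concatenation-based multi-hop schemes, where the feature size grows linearly with the number of hops.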
Primary Area: Deep learning (e.g., architectures, generative models, optimization for deep networks, foundation models, LLMs)
Submission Number: 22213