Mitigating Over-Squashing in Graph Neural Networks by Spectrum-Preserving Sparsification

Published: 01 May 2025, Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
TL;DR: We propose a novel approach to mitigate over-squashing which generates rewired graphs that can preserve the spectra of the original graphs while improving connectivity.
Abstract: The message-passing paradigm of Graph Neural Networks often struggles to exchange information across distant nodes, typically due to structural bottlenecks in certain graph regions, a limitation known as over-squashing. To reduce such bottlenecks, graph rewiring, which modifies graph topology, has been widely used. However, existing graph rewiring techniques often overlook the need to preserve critical properties of the original graph, e.g., spectral properties. Moreover, many approaches rely on increasing the edge count to improve connectivity, which introduces significant computational overhead and exacerbates the risk of over-smoothing. In this paper, we propose a novel graph-rewiring method that leverages spectral graph sparsification to mitigate over-squashing. Specifically, our method generates graphs with enhanced connectivity while maintaining sparsity and largely preserving the original graph spectrum, effectively balancing structural bottleneck reduction and graph property preservation. Experimental results validate the effectiveness of our approach, demonstrating its superiority over strong baseline methods in classification accuracy and retention of the Laplacian spectrum.
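To make the core ingredient concrete, the sketch below shows classical spectrum-preserving sparsification by effective-resistance sampling (in the style of Spielman and Srivastava): each edge is sampled with probability proportional to its weight times its effective resistance, and kept edges are reweighted so the Laplacian is preserved in expectation. This is a generic illustration of spectral sparsification, not the rewiring method proposed in the paper; all function names here are hypothetical, and the dense pseudoinverse is only practical for small graphs.

```python
import numpy as np

def laplacian(n, edges, weights):
    """Build the (dense) weighted graph Laplacian L = D - W."""
    L = np.zeros((n, n))
    for (u, v), w in zip(edges, weights):
        L[u, u] += w
        L[v, v] += w
        L[u, v] -= w
        L[v, u] -= w
    return L

def spectral_sparsify(n, edges, weights, q, seed=0):
    """Draw q edge samples with probability proportional to w_e * R_e
    (effective resistance) and reweight, so E[L_sparse] = L."""
    rng = np.random.default_rng(seed)
    L = laplacian(n, edges, weights)
    Lp = np.linalg.pinv(L)  # Moore-Penrose pseudoinverse of L
    # Effective resistance of edge (u, v): (e_u - e_v)^T L^+ (e_u - e_v)
    R = np.array([Lp[u, u] + Lp[v, v] - 2 * Lp[u, v] for u, v in edges])
    p = weights * R
    p = p / p.sum()
    idx = rng.choice(len(edges), size=q, p=p)
    new_w = np.zeros(len(edges))
    for i in idx:
        new_w[i] += weights[i] / (q * p[i])  # importance-sampling reweighting
    keep = new_w > 0
    return [edges[i] for i in range(len(edges)) if keep[i]], new_w[keep]

# Toy graph: two triangles joined by a bridge (a structural bottleneck).
n = 6
edges = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 5), (5, 3)]
weights = np.ones(len(edges))
sparse_edges, sparse_weights = spectral_sparsify(n, edges, weights, q=20)
```

A useful sanity check: for a connected graph, the weighted effective resistances sum to exactly n - 1, which is why high-resistance bridge edges (the bottlenecks relevant to over-squashing) are sampled with high probability.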
Lay Summary: Graph Neural Networks (GNNs), a type of machine learning model for network-like data (e.g., social networks or molecular structures), often struggle when information needs to cross large distances within the network. This "over-squashing" effect means important messages get diluted or lost, much like a whisper in a crowded room, hindering the GNN's ability to learn effectively. We developed a new technique that reorganizes the network's connections to combat this. Our approach carefully enhances these long-range communication channels. Crucially, it does this while meticulously preserving the graph's essential inherent structure—its unique "spectral signature"—and ensuring the network remains "sparse" (not overly dense with connections), which keeps computations fast. Our research helps these GNNs learn more effectively from complex, interconnected data. By improving information flow without sacrificing the graph's core properties or its computational efficiency, our method leads to significantly more accurate predictions in tasks like classifying different items within the network.
Primary Area: Deep Learning->Graph Neural Networks
Keywords: Graph Neural Networks, Over-squashing, Graph sparsification
Submission Number: 3560