Pruning for GNNs: Lower Complexity with Comparable Expressiveness

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
TL;DR: We prune redundant structures in MP, K-path and K-hop GNNs, achieving lower complexity while preserving expressive power.
Abstract: In recent years, the pursuit of higher expressive power in graph neural networks (GNNs) has often led to more complex aggregation mechanisms and deeper architectures. To address this growing complexity, we identify redundant structures in GNNs and, by pruning them, propose pruned MP-GNNs, pruned K-Path GNNs, and pruned K-Hop GNNs based on their original architectures. We show that: 1) although some structures are removed in pruned MP-GNNs and pruned K-Path GNNs, their expressive power is not compromised; 2) K-Hop MP-GNNs and their pruned architectures exhibit equivalent expressiveness on regular and strongly regular graphs; 3) the complexity of pruned K-Path GNNs and pruned K-Hop GNNs is lower than that of MP-GNNs, yet their expressive power is higher. Experimental results validate our refinements, demonstrating competitive performance across benchmark datasets with improved efficiency.
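To make the K-hop setting concrete, below is a minimal, hedged sketch of generic K-hop message passing on a plain Python graph. It is an illustration of the standard (unpruned) hop-wise aggregation that the abstract refers to, not the paper's pruned architecture; the graph representation, scalar features, and sum-based COMBINE step are assumptions chosen only to keep the example self-contained.

```python
# Illustrative sketch of generic K-hop message passing (NOT the paper's pruned
# variant). Each node aggregates features hop-by-hop from its 1..K-hop
# neighborhoods, which is the source of the extra complexity the paper targets.
from collections import deque
from typing import Dict, List


def k_hop_neighbors(adj: Dict[int, List[int]], src: int, k: int) -> Dict[int, List[int]]:
    """Group nodes by shortest-path distance (1..k) from `src` using BFS."""
    hops: Dict[int, List[int]] = {h: [] for h in range(1, k + 1)}
    seen = {src}
    frontier = deque([(src, 0)])
    while frontier:
        node, dist = frontier.popleft()
        if dist == k:
            continue
        for nbr in adj.get(node, []):
            if nbr not in seen:
                seen.add(nbr)
                hops[dist + 1].append(nbr)
                frontier.append((nbr, dist + 1))
    return hops


def k_hop_layer(adj: Dict[int, List[int]], x: Dict[int, float], k: int) -> Dict[int, float]:
    """One toy K-hop aggregation step: sum features within each hop shell,
    then combine the hop-wise sums with the node's own feature."""
    out = {}
    for node, feat in x.items():
        hop_sums = [sum(x[n] for n in nbrs)
                    for nbrs in k_hop_neighbors(adj, node, k).values()]
        out[node] = feat + sum(hop_sums)  # toy COMBINE: plain summation
    return out


if __name__ == "__main__":
    # Tiny path graph 0-1-2-3 with scalar features equal to node ids.
    adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
    x = {0: 0.0, 1: 1.0, 2: 2.0, 3: 3.0}
    print(k_hop_layer(adj, x, k=2))
```

Note that each layer runs a BFS per node, so the per-layer cost grows with the size of the K-hop neighborhoods; this is the kind of overhead, relative to 1-hop MP-GNNs, that pruning redundant structure aims to reduce.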
Lay Summary: Graph neural networks (GNNs) are powerful tools that help computers understand complex connections, like social networks or molecules. But to make them more accurate, researchers often add more layers and features, which also makes them slower and harder to train. In our work, we asked: can we make GNNs simpler without losing their ability to understand complex structures? We discovered that many parts of GNNs are redundant, and by carefully removing them, the expressive power of the pruned GNNs remains unchanged. This makes GNNs more practical for real-world applications, especially where computing power is limited, such as on mobile devices or when analyzing very large graphs.
Primary Area: Deep Learning->Graph Neural Networks
Keywords: Graph Neural Network, Pruning, Expressiveness, Complexity
Submission Number: 8986