Keywords: Graph Neural Networks, Graph Structure Learning
TL;DR: We rethink graph structure learning in GNNs and find that it can be unnecessary, motivating a simple and effective design for GNN model structures.
Abstract: To improve the performance of Graph Neural Networks (GNNs), Graph Structure Learning (GSL) has been extensively applied to reconstruct or refine original graph structures. While GSL is generally thought to improve GNN performance, it often leads to longer training times and more hyperparameter tuning. Moreover, the distinctions among current GSL methods remain ambiguous from the perspective of GNN training, and there is a lack of theoretical analysis to quantify their effectiveness. Recent studies further suggest that GSL does not consistently outperform baseline GNNs under the same hyperparameter tuning. This motivates us to ask a critical question: *Is GSL really useful for improving GNN performance?* To address this question, we first propose a new GSL framework, which includes three steps: GSL bases (i.e., node representations used to construct new graphs) construction, new structure construction, and view fusion, to better understand GSL. Then, our empirical studies and theoretical analysis show that the mutual information (MI) between node representations and labels does not increase after applying graph convolution on GSL graphs constructed by similarity, indicating that GSL could be unnecessary in most cases. Our experiments fairly reassess the performance of GSL and reveal that adding GSL to GNN baselines or removing GSL from state-of-the-art models has negligible impact on node classification accuracy. We also report that pretrained GSL bases, parameter separation, and early fusion are effective designs within GSL. Our findings challenge the necessity of complex GSL methods and underscore the value of simplicity in GNN design.
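To make the three-step framework named in the abstract concrete, below is a minimal illustrative sketch in PyTorch of similarity-based structure construction and view fusion. The function names (`build_similarity_graph`, `fuse_views`), the kNN sparsification, and the fusion weight `alpha` are assumptions for illustration only, not the authors' implementation.

```python
# Illustrative sketch of the three-step GSL framework described in the abstract.
# All function names, the kNN construction, and the fusion weight `alpha` are
# hypothetical, not taken from the paper's code.
import torch
import torch.nn.functional as F

def build_similarity_graph(bases: torch.Tensor, k: int = 10) -> torch.Tensor:
    """Step 2: construct a new structure from GSL bases via cosine similarity (kNN sparsification)."""
    z = F.normalize(bases, dim=1)          # bases: [N, d] node representations (step 1)
    sim = z @ z.t()                        # pairwise cosine similarity, [N, N]
    topk = sim.topk(k, dim=1).indices      # keep the k most similar neighbors per node
    adj = torch.zeros_like(sim).scatter_(1, topk, 1.0)
    adj = ((adj + adj.t()) > 0).float()    # symmetrize
    return adj

def fuse_views(a_orig: torch.Tensor, a_new: torch.Tensor, alpha: float = 0.5) -> torch.Tensor:
    """Step 3: early fusion of the original and learned graph views by weighted averaging."""
    return alpha * a_orig + (1.0 - alpha) * a_new

# Usage sketch: the bases could come from a pretrained encoder
# (the "pretrained GSL bases" design mentioned in the abstract).
# bases = encoder(features)               # [N, d]
# a_new = build_similarity_graph(bases, k=10)
# a_fused = fuse_views(a_orig, a_new, alpha=0.5)
```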
Supplementary Material: pdf
Primary Area: learning on graphs and other geometries & topologies
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 2035