Towards the Universal Learning Principle for Graph Neural Networks

16 Sept 2023 (modified: 11 Feb 2024) · Submitted to ICLR 2024
Supplementary Material: zip
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Keywords: Graph Neural Network, Graph Filter, Learning Principle
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Abstract: Graph neural networks (GNNs) are currently highly regarded in graph representation learning tasks due to their strong performance. Although various propagation mechanisms and graph filters have been proposed, few works have considered the convergence and stability of graph filters in the infinite-depth regime. To address this problem, we elucidate a criterion for graph filters formed by power series and further establish a scalable regularized learning principle that guides the design of infinitely deep GNNs. Following this framework, we develop Adaptive Power GNN (APGNN), a deep GNN that aggregates graph information of different orders with exponentially decaying weights so as to mine deeper neighborhood information. Unlike existing GNNs, APGNN can be seamlessly extended to an infinite-depth network. Moreover, we analyze the generalization of the proposed learning framework via uniform convergence and derive a theoretical upper bound. Experimental results show that APGNN outperforms state-of-the-art GNNs.
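The abstract describes aggregating propagation orders with exponentially decaying weights so that the resulting power-series filter stays well-behaved as depth grows. Below is a minimal illustrative sketch of that idea in PyTorch; the class name, the `decay` and `num_hops` parameters, and the geometric weighting are assumptions for illustration, not the paper's actual APGNN formulation.

```python
# Illustrative sketch only: geometric (exponentially decaying) weighting of
# successive propagation orders, a hypothetical reading of the abstract.
import torch
import torch.nn as nn


class DecayedPowerAggregation(nn.Module):
    """Aggregates K propagation orders with exponentially decaying weights.

    Computes roughly  sum_{k=0}^{K} (1 - decay) * decay**k * A_hat^k X W,
    a truncated power series whose geometric weights keep the sum bounded
    as K grows (assumed form, not the paper's exact filter).
    """

    def __init__(self, in_dim: int, out_dim: int, num_hops: int = 10, decay: float = 0.5):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)
        self.num_hops = num_hops
        self.decay = decay

    def forward(self, x: torch.Tensor, a_hat: torch.Tensor) -> torch.Tensor:
        # a_hat: normalized adjacency matrix (dense tensor here for simplicity)
        h = self.lin(x)
        out = (1.0 - self.decay) * h
        for k in range(1, self.num_hops + 1):
            h = a_hat @ h                                   # propagate one more hop
            out = out + (1.0 - self.decay) * (self.decay ** k) * h
        return out
```

Because the per-order weights form a convergent geometric series, increasing `num_hops` adds ever-smaller contributions, which is one simple way such a filter could remain stable as depth tends to infinity.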
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 562