Towards the Universal Learning Principle for Graph Neural Networks

11 May 2023 (modified: 12 Dec 2023) · Submitted to NeurIPS 2023
Keywords: Graph Neural Network, Graph Filter, Learning Framework
Abstract: Graph neural networks (GNNs) have attracted considerable attention in graph representation learning owing to their strong performance. Although a variety of propagation mechanisms and graph filters have been proposed, few works have investigated their rationale from a learning perspective. In this paper, we elucidate the criterion for graph filters formed by power series, and further establish a scalable regularized learning framework that theoretically realizes GNNs of infinite depth. Following this framework, we introduce Adaptive Power GNN (APGNN), a deep GNN that aggregates graph information of different orders with exponentially decaying weights, enabling more effective mining of deeper neighbor information. Moreover, a multiple $P$-hop message passing strategy is proposed to efficiently perceive higher-order neighborhoods. Unlike other GNNs, the proposed APGNN can be seamlessly extended to an infinite-depth network. To clarify the learning guarantee, we theoretically analyze the generalization of the proposed framework via uniform convergence. Experimental results show that APGNN outperforms state-of-the-art GNNs, highlighting the effectiveness of our framework.
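To make the aggregation scheme concrete, the sketch below illustrates a power-series graph filter with exponentially decaying weights over multi-hop propagations. It is not the authors' implementation: the function name, the decay rate alpha, the hop count, and the GCN-style symmetric normalization are all illustrative assumptions.

    import numpy as np

    def power_series_propagate(adj, x, alpha=0.5, num_hops=8):
        """Aggregate multi-hop features with exponentially decaying weights:
        out ∝ sum_k alpha^k * A_hat^k @ x.

        adj: dense adjacency matrix (n, n); x: node features (n, d).
        alpha and num_hops are illustrative hyperparameters, not values
        taken from the paper.
        """
        x = np.asarray(x, dtype=float)
        # Symmetrically normalize the adjacency with self-loops (GCN-style).
        a = adj + np.eye(adj.shape[0])
        d_inv_sqrt = 1.0 / np.sqrt(a.sum(axis=1))
        a_hat = a * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

        out = np.zeros_like(x)
        h = x.copy()
        weight, total = 1.0, 0.0
        for _ in range(num_hops + 1):
            out += weight * h   # add the current hop's contribution
            total += weight
            h = a_hat @ h       # propagate one more hop
            weight *= alpha     # exponentially decay deeper hops
        return out / total      # normalize so the weights sum to 1

Truncating the series at num_hops approximates the infinite-depth filter, since the geometric weights make contributions from very deep hops vanishingly small.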
Supplementary Material: pdf
Submission Number: 12187