Wide Graph Neural Network

Published: 01 Feb 2023, Last Modified: 13 Feb 2023, Submitted to ICLR 2023, Readers: Everyone
Keywords: Graph neural networks, representation learning, dictionary learning
TL;DR: This paper proposes a unified view for understanding GNNs and uses it to motivate a new model called Wide Graph Neural Network.
Abstract: Graph neural networks (GNNs) typically suffer from several problems, e.g., over-smoothing (in the spatial domain), poor flexibility (in the spectral domain), and low performance on heterophilic graphs (in both domains). In this paper, we propose a new GNN framework, called Wide Graph Neural Networks (WGNN), to solve these problems. It is motivated by our proposed unified view of GNNs from the perspective of dictionary learning. In this view, we formulate graph learning in GNNs as learning representations over a dictionary, where the fixed graph information serves as the dictionary and the trainable parameters are the representations. Under this formulation, spatial GNNs build their dictionaries from repeated multiplication by the adjacency matrix, while spectral GNNs sum its polynomials. In contrast, WGNN directly concatenates all polynomials as the dictionary, with each polynomial acting as a sub-dictionary. Beyond polynomials, WGNN admits sub-dictionaries of arbitrary size, for instance, the principal components of the adjacency matrix. This wide concatenation structure avoids over-smoothing and promotes flexibility, while supplementing the dictionary with principal components significantly improves the representation of heterophilic graphs. We provide a detailed theoretical analysis and conduct extensive experiments on eight datasets to demonstrate the superiority of the proposed WGNN.
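To make the wide concatenation concrete, below is a minimal PyTorch sketch of a layer in the spirit of the abstract. This is an illustrative reconstruction, not the authors' released code: the class name `WGNN`, the parameters `num_polys` and `pca_rank`, the eigendecomposition-based principal-component sub-dictionary, and the single linear readout are all assumptions made for the example.

```python
import torch
import torch.nn as nn


class WGNN(nn.Module):
    """Sketch of a wide GNN layer (hypothetical; mirrors the abstract).

    The dictionary concatenates polynomial sub-dictionaries
    [X, AX, A^2 X, ...] and, optionally, a principal-component
    sub-dictionary of the adjacency matrix; the trainable linear
    weights play the role of the learned representations.
    """

    def __init__(self, in_dim, out_dim, num_polys=3, pca_rank=0):
        super().__init__()
        self.num_polys = num_polys
        self.pca_rank = pca_rank
        width = in_dim * (num_polys + (1 if pca_rank > 0 else 0))
        # Representations over the wide (concatenated) dictionary.
        self.lin = nn.Linear(width, out_dim)

    def forward(self, x, adj):
        # Polynomial sub-dictionaries: X, AX, A^2 X, ...
        blocks, h = [x], x
        for _ in range(self.num_polys - 1):
            h = adj @ h
            blocks.append(h)
        if self.pca_rank > 0:
            # Principal-component sub-dictionary (assumption): project X
            # onto the top-r eigenvectors of a dense symmetric adjacency.
            _, vecs = torch.linalg.eigh(adj)
            u = vecs[:, -self.pca_rank:]  # top-r components
            blocks.append(u @ (u.T @ x))
        # Wide concatenation instead of summation of polynomials.
        return self.lin(torch.cat(blocks, dim=-1))
```

Under these assumptions, usage would look like `out = WGNN(16, 7, num_polys=3, pca_rank=4)(x, adj)` for node features `x` of shape (n, 16) and a dense symmetric adjacency `adj` of shape (n, n). Because the sub-dictionaries are concatenated rather than summed, each polynomial order keeps its own weights, which is how the sketch reflects the claimed resistance to over-smoothing.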
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Deep Learning and representational learning
Supplementary Material: zip