On the Expressive Equivalence Between Graph Convolution and Attention Models

22 Sept 2022 (modified: 13 Feb 2023) · ICLR 2023 Conference Withdrawn Submission · Readers: Everyone
Abstract: Graph neural networks (GNNs) have achieved remarkable success on a variety of graph tasks, and recent years have witnessed flourishing research on GNNs' expressive power. The number of linear regions a GNN generates is a recently proposed metric for quantifying its capacity; estimates of this number have previously been developed for deep and convolutional neural networks (DNNs and CNNs). In this paper, we compare the expressive power of the classic graph convolutional network (GCN) and attention-based models in terms of their capability to generate linear regions. We show that the predictive advantage of attention models can be matched, or even surpassed, by enhancing the GCN with a refined graph Ricci curvature, resulting in the so-called high-rank graph convolutional network (HRGCN). In this sense, the two models are equivalent in expressive power. Experimental results show that the proposed HRGCN outperforms state-of-the-art results on various classification and prediction tasks.
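
The abstract does not specify how the refined Ricci curvature enters the convolution. As a rough illustration only, a curvature-reweighted GCN layer might look like the sketch below; the reweighting form `1 + curvature`, the dense-tensor interface, and the class name `CurvatureGCNLayer` are assumptions for exposition, not the paper's actual HRGCN.

```python
import torch
import torch.nn as nn


class CurvatureGCNLayer(nn.Module):
    """Illustrative sketch (not the paper's HRGCN): a GCN layer whose
    adjacency is reweighted by a precomputed per-edge graph Ricci
    curvature (e.g., Ollivier-Ricci), followed by the standard
    symmetric normalization D^{-1/2} (A + I) D^{-1/2} and a ReLU."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj, curvature):
        # x: (N, in_dim) node features; adj: (N, N) dense adjacency;
        # curvature: (N, N) per-edge Ricci curvature, zero off-edges
        # (hypothetical input format).
        weighted = adj * (1.0 + curvature)        # assumed reweighting form
        n = weighted.size(0)
        a_hat = weighted + torch.eye(n)           # add self-loops
        deg = a_hat.sum(dim=1)
        d_inv_sqrt = deg.clamp(min=1e-12).pow(-0.5)
        a_norm = d_inv_sqrt.unsqueeze(1) * a_hat * d_inv_sqrt.unsqueeze(0)
        return torch.relu(a_norm @ self.linear(x))
```

Under this reading, the curvature acts as a fixed edge-level gate on message passing, which is one plausible way a GCN could mimic the data-dependent edge weighting that attention models learn.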
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Deep Learning and representational learning
Supplementary Material: zip