A Deep Graph Neural Networks Architecture Design: From Global Pyramid-like Shrinkage Skeleton to Local Link Rewiring

28 Sept 2020 (modified: 05 May 2023) · ICLR 2021 Conference Withdrawn Submission · Readers: Everyone
Keywords: graph neural networks, architecture design, convergence, erroneous weighted links
Abstract: Expressivity plays a fundamental role in evaluating deep neural networks, and it is closely related to understanding the limits of performance improvement. In this paper, we propose a three-pipeline training framework based on critical expressivity, consisting of global model contraction, weight evolution, and link weight rewiring. Specifically, we propose a pyramid-like skeleton to overcome the saddle points that hinder information transfer. We then analyze the cause of the modularity (clustering) phenomenon in network topology and use it to rewire potentially erroneous weighted links. We conduct numerical experiments on node classification, and the results confirm that the proposed training framework leads to significantly improved performance in terms of fast convergence and robustness to potentially erroneous weighted links. The architecture design for GNNs, in turn, verifies the expressivity of GNNs from the perspectives of dynamics and topological space, and provides useful guidelines for designing more efficient neural networks. The code is available at https://github.com/xjglgjgl/SRGNN.
One-sentence Summary: This paper describes a GNN architecture design method combining a global architecture for fast convergence with local link rewiring for erroneous inputs.
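
The abstract describes a "pyramid-like shrinkage skeleton," i.e. a deep GNN whose hidden widths contract from layer to layer. The sketch below is not the authors' implementation (that is in the linked repository); it only illustrates the shrinking-width idea. The dense graph convolution, shrink ratio, depth, base width, and activation are assumptions chosen for illustration.

```python
# Minimal sketch of a pyramid-like (shrinking-width) GNN skeleton.
# Not the authors' code; layer type and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DenseGraphConv(nn.Module):
    """A plain graph convolution on a dense, row-normalized adjacency matrix."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj_norm):
        # Aggregate neighbor features, then apply a linear transform.
        return self.lin(adj_norm @ x)


class PyramidGNN(nn.Module):
    """Hidden dimensions shrink geometrically layer by layer (pyramid-like skeleton)."""
    def __init__(self, in_dim, num_classes, depth=4, base_width=256, shrink=0.5):
        super().__init__()
        widths = [in_dim] + [max(num_classes, int(base_width * shrink ** i))
                             for i in range(depth)]
        self.layers = nn.ModuleList(
            DenseGraphConv(widths[i], widths[i + 1]) for i in range(depth)
        )
        self.out = nn.Linear(widths[-1], num_classes)

    def forward(self, x, adj_norm):
        for layer in self.layers:
            x = F.relu(layer(x, adj_norm))
        return self.out(x)


# Usage on a toy graph: 5 nodes, 8 input features, 3 classes.
if __name__ == "__main__":
    adj = torch.eye(5) + torch.rand(5, 5).round()   # toy adjacency with self-loops
    deg_inv = torch.diag(1.0 / adj.sum(dim=1))
    adj_norm = deg_inv @ adj                        # simple row normalization
    model = PyramidGNN(in_dim=8, num_classes=3)
    logits = model(torch.randn(5, 8), adj_norm)
    print(logits.shape)                             # torch.Size([5, 3])
```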
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Reviewed Version (pdf): https://openreview.net/references/pdf?id=8s_Qq9jd74