Learning to Control the Smoothness of GCN Features

13 May 2024 (modified: 06 Nov 2024) · Submitted to NeurIPS 2024 · CC BY 4.0
Keywords: graph neural networks, activation function, smoothness of node features
TL;DR: We develop a new theoretically principled approach that lets GCNs learn node features with a desired smoothness, improving node classification accuracy.
Abstract: The pioneering works of Oono & Suzuki [ICLR, 2020] and Cai & Wang [arXiv:2006.13318] analyze the smoothness of graph convolutional network (GCN) features. Their results reveal an intricate empirical correlation between node classification accuracy and the ratio of smooth to non-smooth feature components. However, the optimal ratio that favors node classification is unknown, and the non-smooth feature components of deep GCNs with ReLU or leaky ReLU activations diminish as depth grows. In this paper, we propose a new strategy that lets GCNs learn node features with a desired smoothness to enhance node classification. Our approach has three key steps: (1) We establish a geometric relationship between the input and output of ReLU or leaky ReLU. (2) Building on our geometric insights, we augment the message-passing process of graph convolutional layers (GCLs) with a learnable term that modulates the smoothness of node features efficiently. (3) We investigate the achievable ratio between smooth and non-smooth feature components for GCNs with the augmented message-passing scheme. Our extensive numerical results show that the augmented message passing markedly improves node classification for GCN and several related models.
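The abstract does not spell out the exact form of the learnable term, so the following is only a minimal PyTorch sketch of the general idea: measuring the smooth/non-smooth ratio by projecting features onto the smooth subspace (in the spirit of the Oono & Suzuki and Cai & Wang analyses cited above), and a GCL whose propagation reinjects a learnable amount of the non-smooth residual. The names `smooth_ratio`, `AugmentedGCL`, and the scalar parameter `alpha` are hypothetical, and the parameterization of the paper's actual term may differ.

```python
import torch
import torch.nn as nn

def smooth_ratio(x, deg):
    """Hypothetical helper: ratio of smooth to non-smooth feature energy.
    Assumes a connected graph, where the smooth subspace of the
    symmetric-normalized adjacency is spanned by deg^{1/2}.
    x: (n, f) node features, deg: (n,) node degrees."""
    u = deg.sqrt()
    u = u / u.norm()
    smooth = u[:, None] * (u @ x)        # projection onto span{deg^{1/2}}
    return smooth.norm() / (x - smooth).norm()

class AugmentedGCL(nn.Module):
    """Sketch of a graph convolutional layer with augmented message
    passing: a learnable scalar controls how much of the non-smooth
    (high-frequency) residual is retained at each layer."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)
        # Learnable coefficient for the non-smooth component
        # (assumption: the paper may use a richer parameterization).
        self.alpha = nn.Parameter(torch.zeros(1))

    def forward(self, x, a_hat):
        # a_hat: D^{-1/2} (A + I) D^{-1/2}, the usual GCN propagation matrix.
        smooth = a_hat @ x               # standard smoothing propagation
        residual = x - smooth            # non-smooth component it removes
        h = smooth + self.alpha * residual
        return torch.relu(self.lin(h))
```

Because `alpha` is learned per layer, the network can trade off smoothing against preserving high-frequency information, rather than having deep ReLU layers drive the non-smooth components to zero.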
Supplementary Material: zip
Primary Area: Graph neural networks
Submission Number: 7099