Rethinking Smoothness in Node Features Learned by Graph Convolutional Networks

TMLR Paper 6922 Authors

08 Jan 2026 (modified: 12 Feb 2026) · Withdrawn by Authors · CC BY 4.0
Abstract: The pioneering works of Oono and Suzuki (ICLR 2020) and Cai and Wang (arXiv:2006.13318) initiated the analysis of feature smoothness in graph convolutional networks (GCNs), uncovering a strong empirical connection between node classification accuracy and the ratio of smooth to non-smooth feature components. However, it remains unclear how to effectively control this ratio in learned node features to enhance classification performance. Furthermore, deep GCNs with ReLU or leaky ReLU activations tend to suppress non-smooth feature components. In this paper, we introduce a novel strategy to enable GCNs to learn node features with \textbf{controllable smoothness}, thereby improving node classification accuracy. Our method comprises three core components: (1) deriving a geometric relationship between the inputs and outputs of ReLU and leaky ReLU activations; (2) augmenting the standard message-passing mechanism in graph convolutional layers with a learnable term for efficient smoothness modulation; and (3) theoretically analyzing the attainable smooth-to-non-smooth ratios under the proposed augmented propagation. Extensive experiments demonstrate that our approach substantially enhances node classification performance across GCNs and related architectures.
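To make the central quantity concrete: the smooth-to-non-smooth ratio discussed in the abstract can be illustrated by splitting node features along the spectrum of the symmetric normalized graph Laplacian, where low-frequency (small-eigenvalue) components are the smooth part. The sketch below is a toy illustration of this decomposition, not the paper's actual method; the 4-node path graph, the feature matrix, and the choice of keeping only the lowest-frequency mode as "smooth" are all assumptions made for the demo.

```python
import numpy as np

# Toy graph: a 4-node path (made up for this demo).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
d = A.sum(axis=1)
# Symmetric normalized Laplacian: L_sym = I - D^{-1/2} A D^{-1/2}
L = np.eye(4) - A / np.sqrt(np.outer(d, d))

# Hypothetical 2-dimensional node features.
X = np.array([[1.0, 0.0],
              [0.9, 0.1],
              [-0.8, 1.0],
              [1.1, -0.2]])

# Eigendecomposition; np.linalg.eigh returns eigenvalues in ascending order,
# so the leading columns of U are the smoothest graph signals.
eigvals, U = np.linalg.eigh(L)
k = 1                                   # treat only the lowest-frequency mode as "smooth"
P_smooth = U[:, :k] @ U[:, :k].T        # orthogonal projector onto the smooth subspace
X_smooth = P_smooth @ X
X_rough = X - X_smooth                  # residual: the non-smooth component

ratio = np.linalg.norm(X_smooth) / np.linalg.norm(X_rough)
print(f"smooth-to-non-smooth ratio: {ratio:.3f}")
```

A deep GCN with ReLU-style activations tends to drive this ratio upward layer by layer (the oversmoothing effect the cited works analyze); the paper's proposal is to make that ratio controllable rather than monotonically increasing.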
Submission Type: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Moshe_Eliasof1
Submission Number: 6922