Iterative Encode-and-Decode Graph Neural Network

Published: 01 Jan 2023, Last Modified: 01 Mar 2024, ADMA (4) 2023
Abstract: Graph neural networks (GNNs) have been extensively explored for semi-supervised learning on graphs, which uses few labels to complete tasks without requiring costly labeling information. Related methods are dedicated to mitigating the over-smoothing phenomenon in order to generate reliable node representations. However, existing methods lack correct guidance, from graph characteristics to node representations, for neighbors and links, resulting in incorrect aggregation of neighbor information and poor representation discriminability. In this paper, we introduce a novel encoding-and-decoding framework, dubbed Iterative Encode-and-Decode Graph Neural Network (IEDGNN), which correctly leverages label-guided structure and uses features for self-supervision of representations to alleviate the over-smoothing phenomenon. First, we propose a central component reconstruction module that corrects the category centers of node representations, lowering the likelihood of aggregating neighbor information across categories. Then, we propose a feature self-reconstruction module that enables node representations to retain valid original attributes, making them more informative in downstream classification tasks. We also theoretically analyze the impact of different encoder-decoder combinations on representation generation in our design. Extensive experiments demonstrate that IEDGNN outperforms state-of-the-art models on eight graph benchmark datasets under three label ratios.
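The feature self-reconstruction idea described above, encoding node features over the graph and decoding the embeddings back toward the original attributes, can be sketched in miniature. This is a hedged illustration only: the adjacency, weights, and single GCN-style layer here are assumptions for demonstration, not the paper's actual IEDGNN modules.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy graph: 4 nodes, symmetric adjacency with self-loops (illustrative only).
A = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [0, 0, 1, 1]], dtype=float)
D_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
A_hat = D_inv_sqrt @ A @ D_inv_sqrt   # symmetrically normalized adjacency

X = rng.standard_normal((4, 8))       # node features: 4 nodes, 8 dimensions
W_enc = rng.standard_normal((8, 3))   # hypothetical encoder weights (8 -> 3)
W_dec = rng.standard_normal((3, 8))   # hypothetical decoder weights (3 -> 8)

# Encode: one GCN-style aggregation layer with ReLU.
H = np.maximum(A_hat @ X @ W_enc, 0)

# Decode: map embeddings back to the original feature space.
X_rec = H @ W_dec

# Self-reconstruction objective: embeddings should preserve the attributes.
recon_loss = np.mean((X - X_rec) ** 2)
print(H.shape, X_rec.shape)
```

In a trained model this reconstruction loss would be minimized jointly with the classification objective, encouraging the representations to stay informative about the original attributes rather than over-smoothing toward neighbor averages.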