Abstract: To avoid overfitting when Graph Neural Networks (GNNs) are applied to test examples, the feature encoding scheme of such GNNs usually incorporates a dropout procedure. However, after latent node representations are learned under this scheme, the Gaussian noise produced by the dropout operation is inevitably transmitted into the subsequent neighborhood aggregation step, which necessarily hampers the unbiased aggregation ability of GNN models. To address this issue, we present a novel aggregator, De-Noising Aggregation (DNAG), which utilizes Principal Component Analysis (PCA) to preserve the real signals aggregated from neighboring features while simultaneously filtering out the Gaussian noise. Unlike traditional applications of PCA for dimensionality reduction, we regard PCA as an aggregator that compresses neighboring node features, yielding stronger expressive de-noising power. We further propose new training architectures to simplify the intensive computation of PCA in DNAG. Extensive experiments show the apparent superiority of the proposed DNAG models in gaining more de-noising capability and achieving state-of-the-art performance on a set of predictive tasks over several graph-structured datasets.
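The abstract does not give the exact DNAG formulation, so the following is only a minimal NumPy sketch of the general idea it describes: treating PCA as a neighborhood aggregator that keeps the dominant signal directions of a node's neighbor features and discards the low-variance residual attributed to dropout noise. The function name `pca_denoise_aggregate`, the choice of mean pooling after reconstruction, and the toy data are all illustrative assumptions, not the paper's method.

```python
import numpy as np

def pca_denoise_aggregate(neighbor_feats: np.ndarray, k: int) -> np.ndarray:
    """Aggregate one node's neighbor features by projecting them onto the
    top-k principal components, dropping the low-variance directions that
    (per the paper's argument) carry dropout-induced noise.

    neighbor_feats: (n_neighbors, d) matrix of a node's neighbor features.
    k: number of principal components to keep (k < d).
    Returns a single d-dimensional aggregated representation.
    """
    # Center the neighbor features.
    mean = neighbor_feats.mean(axis=0)
    X = neighbor_feats - mean
    # SVD yields the principal directions as the rows of Vt.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    V_k = Vt[:k]                        # top-k principal directions, (k, d)
    # Project onto the k-dim principal subspace and reconstruct:
    # keep the dominant signal, discard the noisy residual.
    denoised = X @ V_k.T @ V_k + mean   # (n_neighbors, d)
    # Mean-pool the denoised neighbor features into one vector.
    return denoised.mean(axis=0)

# Toy usage: 5 neighbors sharing an 8-dim signal corrupted by additive noise.
rng = np.random.default_rng(0)
signal = rng.normal(size=(1, 8)).repeat(5, axis=0)  # shared low-rank signal
noisy = signal + 0.1 * rng.normal(size=(5, 8))      # dropout-like perturbation
print(pca_denoise_aggregate(noisy, k=2))
```

In this sketch the de-noising comes from rank truncation: directions with small singular values, dominated by the per-neighbor noise, are zeroed out before pooling.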