How Powerful is Implicit Denoising in Graph Neural Networks

Published: 22 Nov 2022, Last Modified: 22 Oct 2023 · NeurIPS 2022 GLFrontiers Workshop
Keywords: GNN denoising, GNN theory
TL;DR: We theoretically analyze the denoising effect in graph neural networks
Abstract: Graph Neural Networks (GNNs), which aggregate features from neighbors, are widely used for processing graph-structured data due to their powerful representation learning capabilities. It is generally believed that GNNs can implicitly remove feature noise and thus still yield generalizable models; this view has motivated several existing works to derive GNN models from the graph signal denoising (GSD) problem. However, few works have rigorously analyzed the implicit denoising effect in graph neural networks. In this work, we conduct a comprehensive theoretical study of when and why implicit denoising happens in GNNs. Our analysis suggests that implicit denoising depends largely on the connectivity and size of the graph, as well as on the GNN architecture. Extensive empirical evaluations verify our theoretical analysis and confirm that GNNs are more effective than a Multi-Layer Perceptron (MLP) at eliminating noise in the feature matrix. Finally, motivated by adversarial machine learning for improving the robustness of neural networks, and by the correlation between node features and graph structure in GSD, we propose the adversarial graph signal denoising (AGSD) problem. Solving this problem yields a robust graph convolution that enhances both the smoothness of the node representations and the implicit denoising effect.
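The implicit denoising effect described above can be illustrated with a minimal sketch (not the paper's code; the ring graph, noise level, and two propagation steps are illustrative assumptions): when a signal is smooth over the graph, propagating its noisy version with a row-normalized adjacency matrix — the core aggregation step of many GNN layers — averages out i.i.d. feature noise and shrinks the error toward the clean signal.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Ring graph with self-loops: each node connects to itself and its two neighbors.
A = np.zeros((n, n))
for i in range(n):
    A[i, (i - 1) % n] = A[i, (i + 1) % n] = A[i, i] = 1.0
P = A / A.sum(axis=1, keepdims=True)  # row-normalized propagation matrix D^{-1}A

# A smooth graph signal plus i.i.d. Gaussian feature noise.
x_clean = np.sin(2 * np.pi * np.arange(n) / n)
x_noisy = x_clean + 0.5 * rng.standard_normal(n)

err_before = np.linalg.norm(x_noisy - x_clean)
x_prop = P @ (P @ x_noisy)  # two rounds of neighbor aggregation ("two GNN layers")
err_after = np.linalg.norm(x_prop - x_clean)
print(err_before, err_after)  # error drops after aggregation
```

Because the clean signal varies slowly over the ring, aggregation barely distorts it while the noise variance shrinks with each averaging step; on a sparsely connected or highly irregular graph the trade-off is less favorable, consistent with the claim that the denoising effect depends on graph connectivity and size.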
Community Implementations: 7 code implementations (https://www.catalyzex.com/paper/arxiv:2209.14514/code)