Learning Graph Neural Networks with Noisy Labels

Published: 17 Apr 2019, Last Modified: 23 Mar 2025. Venue: LLD 2019. Readers: Everyone
Keywords: weakly supervised learning, noisy label data, graph neural network, loss correction
TL;DR: We apply loss correction to graph neural networks to train a model that is more robust to label noise.
Abstract: We study the robustness of GNN training procedures to symmetric label noise. By combining nonlinear neural message-passing models (e.g. Graph Isomorphism Networks, GraphSAGE, etc.) with loss correction methods, we present a noise-tolerant approach to the graph classification task. Our experiments show that test accuracy can be improved under artificial symmetric label noise.
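The abstract does not specify which loss correction method is used, but a standard choice for symmetric label noise is backward correction (Patrini et al.): the per-class losses are premultiplied by the inverse of the label-noise transition matrix, making the corrected loss an unbiased estimate of the clean loss. The sketch below, with hypothetical function names and NumPy in place of any particular GNN framework, illustrates the idea; the GNN itself would simply produce the class probabilities fed into it.

```python
import numpy as np

def symmetric_noise_matrix(num_classes: int, noise_rate: float) -> np.ndarray:
    """T[i, j] = P(noisy label = j | true label = i) under symmetric noise:
    each label is flipped to any other class with equal probability."""
    T = np.full((num_classes, num_classes), noise_rate / (num_classes - 1))
    np.fill_diagonal(T, 1.0 - noise_rate)
    return T

def backward_corrected_loss(probs: np.ndarray, noisy_label: int,
                            T: np.ndarray) -> float:
    """Backward loss correction: apply T^{-1} to the vector of per-class
    cross-entropy losses, then index with the (possibly noisy) label.
    In expectation over the noise this recovers the clean loss."""
    per_class_loss = -np.log(probs)          # l_c = -log p_c for each class c
    corrected = np.linalg.inv(T) @ per_class_loss
    return float(corrected[noisy_label])
```

With `noise_rate = 0` the matrix is the identity and the corrected loss reduces to ordinary cross-entropy, which is a quick sanity check on the construction.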
Community Implementations: 1 code implementation (https://www.catalyzex.com/paper/learning-graph-neural-networks-with-noisy/code)

