Learning Graph Neural Networks with Noisy Labels

Published: 17 Apr 2019, Last Modified: 22 Oct 2023 · LLD 2019
Keywords: weakly supervised learning, noisy label data, graph neural network, loss correction
TL;DR: We apply loss correction to graph neural networks to train a model that is more robust to label noise.
Abstract: We study the robustness of GNN training procedures to symmetric label noise. By combining nonlinear neural message-passing models (e.g., Graph Isomorphism Networks, GraphSAGE) with loss correction methods, we present a noise-tolerant approach to the graph classification task. Our experiments show that test accuracy can be improved under artificial symmetric label noise.
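
As a rough illustration of how loss correction might be combined with a GNN classifier under symmetric label noise, the sketch below builds a symmetric noise transition matrix and applies forward loss correction to per-graph logits. This is not the authors' code; the function names, the assumed noise rate, and the placeholder `model` are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def symmetric_noise_transition(num_classes: int, noise_rate: float) -> torch.Tensor:
    # T[i, j] = P(observed label = j | true label = i) under symmetric noise:
    # the true label is kept with probability (1 - noise_rate) and flipped
    # uniformly to any other class otherwise.
    T = torch.full((num_classes, num_classes), noise_rate / (num_classes - 1))
    T.fill_diagonal_(1.0 - noise_rate)
    return T

def forward_corrected_loss(logits: torch.Tensor, noisy_labels: torch.Tensor,
                           T: torch.Tensor) -> torch.Tensor:
    # Forward correction: push the model's clean-label posterior through T
    # before taking the negative log-likelihood of the observed noisy labels.
    clean_probs = F.softmax(logits, dim=-1)   # p(clean label | graph)
    noisy_probs = clean_probs @ T             # p(noisy label | graph)
    return F.nll_loss(torch.log(noisy_probs + 1e-12), noisy_labels)

# Hypothetical usage inside a training step, where `model` is any graph
# classifier (e.g. a GIN or GraphSAGE network) producing per-graph logits
# and `batch.noisy_y` holds the observed (possibly corrupted) labels:
# T = symmetric_noise_transition(num_classes=2, noise_rate=0.2)
# logits = model(batch)
# loss = forward_corrected_loss(logits, batch.noisy_y, T)
# loss.backward()
```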
Community Implementations: 1 code implementation via CatalyzeX (https://www.catalyzex.com/paper/arxiv:1905.01591/code)
