Keywords: weakly supervised learning, noisy label data, graph neural network, loss correction
TL;DR: We apply loss correction to graph neural networks to train models that are more robust to label noise.
Abstract: We study the robustness of GNN training procedures to symmetric label noise. By combining nonlinear neural message-passing models (e.g. Graph Isomorphism Networks, GraphSAGE) with loss correction methods, we present a noise-tolerant approach to the graph classification task. Our experiments show that test accuracy can be improved under artificial symmetric label noise.
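The abstract describes loss correction under symmetric label noise but does not include code here. The sketch below, under my own assumptions, illustrates forward loss correction with a symmetric noise transition matrix applied to a graph classifier's logits; the function names and the PyTorch framing are illustrative, and in practice the logits would come from a GNN such as GIN or GraphSAGE with a readout layer rather than random values.

```python
import torch
import torch.nn.functional as F

def symmetric_noise_matrix(num_classes: int, noise_rate: float) -> torch.Tensor:
    """Row-stochastic transition matrix T for symmetric label noise:
    T[i, j] = P(noisy label = j | clean label = i)."""
    T = torch.full((num_classes, num_classes), noise_rate / (num_classes - 1))
    T.fill_diagonal_(1.0 - noise_rate)
    return T

def forward_corrected_loss(logits: torch.Tensor, noisy_labels: torch.Tensor,
                           T: torch.Tensor) -> torch.Tensor:
    """Forward loss correction: push the model's clean-class posterior through T
    before computing cross-entropy against the observed (noisy) labels."""
    clean_probs = F.softmax(logits, dim=-1)   # predicted P(clean label | graph)
    noisy_probs = clean_probs @ T             # implied P(noisy label | graph)
    return F.nll_loss(torch.log(noisy_probs + 1e-12), noisy_labels)

# Toy usage: random logits stand in for the output of a graph classifier.
num_graphs, num_classes, noise_rate = 32, 6, 0.3
logits = torch.randn(num_graphs, num_classes, requires_grad=True)
noisy_labels = torch.randint(num_classes, (num_graphs,))
T = symmetric_noise_matrix(num_classes, noise_rate)
loss = forward_corrected_loss(logits, noisy_labels, T)
loss.backward()
```

The noise rate used to build T is assumed known (or estimated); under symmetric noise every off-diagonal entry shares the same probability mass, which is what makes this setting a convenient benchmark for robustness.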
Community Implementations: [1 code implementation](https://www.catalyzex.com/paper/learning-graph-neural-networks-with-noisy/code)