GCL: Graph Calibration Loss for Trustworthy Graph Neural Network

Published: 01 Jan 2022 · Last Modified: 17 May 2023 · ACM Multimedia 2022
Abstract: Despite the great success of Graph Neural Networks (GNNs), their trustworthiness remains under-explored. A very recent study suggests that GNNs are under-confident in their predictions, which is the opposite of what is typically observed in deep neural networks. In this paper, we investigate why this is the case. We discover that the "shallow" architecture of GNNs is the central cause. To address this challenge, we propose a novel Graph Calibration Loss (GCL), the first end-to-end calibration method for GNNs, which reshapes the standard cross-entropy loss so that it up-weights the loss assigned to high-confidence examples. Through empirical observation and theoretical justification, we show that GCL's calibration mechanism amounts to adding a minimal-entropy regulariser to the KL-divergence, which lowers the entropy of correctly classified samples. To evaluate the effectiveness of GCL, we train several representative GNN models with GCL as the loss function on various citation network datasets, and further apply GCL to a self-training framework. Compared to existing methods, the proposed method achieves state-of-the-art calibration performance on the node classification task and even improves standard classification accuracy in almost all cases.
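The abstract describes GCL from two equivalent angles: a reshaped cross-entropy that up-weights high-confidence examples, and a KL-divergence objective with a minimal-entropy regulariser. The paper's exact formulation is not reproduced on this page, so the sketch below is only a hypothetical illustration of those two views, not the authors' definition. The inverse-focal-style weight (1 + p_t)^gamma, the regulariser coefficient lam, and the function names are all assumptions introduced for illustration.

```python
import torch
import torch.nn.functional as F


def reshaped_ce_loss(logits: torch.Tensor,
                     targets: torch.Tensor,
                     gamma: float = 1.0) -> torch.Tensor:
    """Sketch of view 1: cross entropy reshaped to up-weight confident examples.

    Hypothetical form: per-node CE is scaled by (1 + p_t)^gamma, the reverse
    of focal loss (which down-weights confident examples). `gamma` is an
    assumed knob; plain CE is recovered at gamma = 0.
    """
    log_probs = F.log_softmax(logits, dim=-1)
    probs = log_probs.exp()
    # p_t: predicted probability of the true class for each node.
    p_t = probs.gather(1, targets.unsqueeze(1)).squeeze(1)
    ce = F.nll_loss(log_probs, targets, reduction="none")
    return ((1.0 + p_t) ** gamma * ce).mean()


def entropy_regularized_ce(logits: torch.Tensor,
                           targets: torch.Tensor,
                           lam: float = 0.1) -> torch.Tensor:
    """Sketch of view 2: CE plus a minimal-entropy regulariser.

    Penalising predictive entropy pushes the model toward more confident
    (lower-entropy) predictions, counteracting under-confidence. `lam` is
    an assumed coefficient.
    """
    log_probs = F.log_softmax(logits, dim=-1)
    probs = log_probs.exp()
    entropy = -(probs * log_probs).sum(dim=-1)
    return F.nll_loss(log_probs, targets) + lam * entropy.mean()
```

Either sketch is a drop-in replacement wherever `F.cross_entropy` would be used when training a GNN node classifier, e.g. `loss = reshaped_ce_loss(model(x, edge_index)[train_mask], y[train_mask])`.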