Node-wise Calibration of Graph Neural Networks under Out-of-Distribution Nodes via Reinforcement Learning

22 Sept 2023 (modified: 25 Mar 2024) · ICLR 2024 Conference Withdrawn Submission
Keywords: graph neural network, calibration, reinforcement learning
Abstract: Graph neural networks (GNNs) achieve great success in tasks such as node classification, link prediction, and graph classification. At their core, GNNs obtain representative features by aggregating neighborhood node information through the message-passing mechanism. However, when the graph contains out-of-distribution (OOD) nodes, existing methods generally fail to provide reliable confidence for in-distribution (ID) classification, owing to the under-explored negative impact of the OOD nodes. Our studies suggest that the calibration issue of GNNs with OOD nodes is more complicated than that without them: on some datasets the GNN predictions are under-confident, while on others they are over-confident. This irregularity makes current calibration methods less effective, since none of them accounts for the negative impact of OOD nodes. Inspired by existing work that calibrates neural networks with new loss functions designed to implicitly adjust the entropy of the output, we aim to achieve the same goal by adjusting edge weights. Our empirical studies suggest that manually lowering the weights of edges connecting ID nodes and OOD nodes can effectively mitigate the calibration issue. However, identifying these edges and determining their weights remains challenging, since the OOD nodes are unknown during training. To tackle this challenge, we propose a novel framework called \underline{R}L-enhanced \underline{N}ode-wise \underline{G}raph \underline{E}dge \underline{R}e-weighting (RNGER) to calibrate GNNs against OOD nodes. RNGER explores how the entropy of target nodes is affected by adjusting edge weights, without requiring OOD nodes to be identified. We accordingly develop an iterative edge sampling and re-weighting method and formulate it as a Markov decision process.
With reinforcement learning, we can then obtain a graph structure that alleviates the calibration issue of GNNs. Experimental results on benchmark datasets demonstrate that our method significantly reduces the expected calibration error (ECE) while achieving accuracy comparable to strong baselines and other state-of-the-art methods.
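Since the evaluation centers on expected calibration error, the following sketch shows the standard equal-width-bin ECE computation for node-classification probabilities. This is a generic reference implementation, not the authors' code; the function and variable names are our own:

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=10):
    """ECE: the gap between accuracy and mean confidence, averaged
    over equal-width confidence bins and weighted by bin occupancy."""
    confidences = probs.max(axis=1)        # top-class confidence per node
    predictions = probs.argmax(axis=1)     # predicted class per node
    accuracies = (predictions == labels).astype(float)

    ece = 0.0
    bin_edges = np.linspace(0.0, 1.0, n_bins + 1)
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            gap = abs(accuracies[in_bin].mean() - confidences[in_bin].mean())
            ece += in_bin.mean() * gap     # weight by the bin's share of nodes
    return ece
```

A well-calibrated model's confidence matches its accuracy in every bin, driving each per-bin gap (and hence the ECE) toward zero; miscalibration in either direction, under- or over-confidence, increases it.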
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 6276