On the Global and Local Calibration of Graph Neural Networks

Published: 13 Apr 2026, Last Modified: 13 Apr 2026
Venue: Calibration for Modern AI @ AISTATS 2026
License: CC BY 4.0
Keywords: Graph neural networks, calibration
Abstract: Recent work on the calibration of Graph Neural Networks (GNNs) has largely concluded that GNNs are miscalibrated and typically under-confident on standard node classification benchmarks, motivating the development of graph-specific calibration methods evaluated in terms of Expected Calibration Error (ECE) on citation datasets such as Cora, Citeseer, and Pubmed. We revisit these conclusions and show that much of the reported miscalibration is explained by hyperparameter choices rather than intrinsic limitations of GNN architectures: properly tuned classical GNNs achieve ECE comparable to that of existing calibration methods. We further provide the first study of local calibration in graph neural networks by computing the Local Calibration Error (LCE) on graph data. In particular, we adapt LCE to the graph setting by defining locality through distances in the node embedding space learned by the GNN. While global calibration errors are small, we observe higher local miscalibration. As a future direction, we argue that calibration of GNNs should be studied locally and on larger graph benchmarks rather than relying solely on global metrics on small datasets.
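The abstract contrasts global ECE with an embedding-space notion of LCE. As a minimal sketch of this contrast (not the authors' implementation), the snippet below computes binned ECE and a hypothetical LCE variant that averages the |accuracy − confidence| gap over k-nearest-neighbour neighbourhoods in the learned embedding space; the function names, the choice of Euclidean distance, and the neighbourhood size `k` are all assumptions for illustration.

```python
import numpy as np

def expected_calibration_error(conf, correct, n_bins=15):
    """Standard binned ECE: weighted mean |accuracy - confidence| per bin."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            ece += mask.mean() * abs(correct[mask].mean() - conf[mask].mean())
    return ece

def local_calibration_error(conf, correct, embeddings, k=50):
    """Hypothetical LCE sketch: locality defined by k nearest neighbours
    in the GNN's node embedding space (Euclidean distance, node included)."""
    # Pairwise squared distances; fine for small graphs such as Cora.
    d2 = ((embeddings[:, None, :] - embeddings[None, :, :]) ** 2).sum(-1)
    nbrs = np.argsort(d2, axis=1)[:, :k]  # each row: indices of k neighbours
    # Per-node gap between neighbourhood accuracy and neighbourhood confidence.
    gaps = np.abs(correct[nbrs].mean(axis=1) - conf[nbrs].mean(axis=1))
    return gaps.mean()
```

A model can thus have near-zero ECE globally while `local_calibration_error` is large, whenever over- and under-confident regions of the embedding space cancel out in the global bins.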
Submission Number: 32