Abstract: Accurate confidence estimates are crucial for safe graph neural network (GNN) deployment, yet link prediction (LP) calibration remains understudied. We provide novel insights into LP calibration by highlighting the importance of meaningful node-level uncertainties. In response, we propose E-ΔUQ, an architecture-agnostic framework that leverages stochastic centering to incorporate epistemic uncertainty into GNNs. Our work provides design principles and three E-ΔUQ variants that improve trust in LP models while introducing minimal overhead. Key results demonstrate that properly handling node-level uncertainty improves edge calibration. We evaluate E-ΔUQ variants on citation networks and find that intermediate stochastic layers outperform alternatives by producing better node uncertainties. E-ΔUQ reduces calibration error by 15-50% while maintaining comparable prediction fidelity.
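To make the stochastic-centering idea concrete, the sketch below illustrates ΔUQ-style anchoring in a toy setting. This is an assumption-laden illustration, not the paper's E-ΔUQ implementation: the "model" is a fixed random linear map standing in for a trained GNN, and the anchor distribution, anchor count `K`, and helper `anchored_predict` are all hypothetical names chosen for this example. The core mechanism shown is real, though: each input is re-expressed relative to a random anchor, and averaging predictions over many anchors yields both a mean prediction and an epistemic-uncertainty estimate.

```python
import numpy as np

# Toy sketch of stochastic centering (anchoring): instead of f(x), the
# network consumes [x - c, c] for a random anchor c drawn from the data
# distribution. At inference, sampling K anchors gives an ensemble-like
# spread; the std across anchors serves as an epistemic uncertainty.
rng = np.random.default_rng(0)
D, K = 8, 32                       # feature dim, number of anchors

W = rng.normal(size=(2 * D, 1))    # stand-in for trained network weights

def anchored_predict(x, anchors):
    """Predict once per anchor; return mean and std over anchors."""
    preds = []
    for c in anchors:
        inp = np.concatenate([x - c, c])   # [residual, anchor] input
        preds.append(inp @ W)
    preds = np.stack(preds)
    return preds.mean(axis=0), preds.std(axis=0)

x = rng.normal(size=D)
anchors = rng.normal(size=(K, D))  # anchors ~ training distribution
mu, sigma = anchored_predict(x, anchors)
print(mu.shape, sigma.shape)
```

In E-ΔUQ terms, such per-node uncertainties would then be propagated to edge scores, which is where the calibration gains reported above arise; the paper's variants differ in which layer the anchoring is applied at.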