KGE Calibrator+: An Efficient Probability Calibration Method of Knowledge Graph Embedding Models for Trustworthy Link Prediction

ACL ARR 2025 February Submission 8309 Authors

16 Feb 2025 (modified: 09 May 2025) · ACL ARR 2025 February Submission · CC BY 4.0
Abstract: Knowledge graph embedding (KGE) models are designed for the task of link prediction, which aims to infer missing triples by learning accurate representations of the entities and relations in a knowledge graph. However, existing KGE research largely overlooks probability calibration, yielding uncalibrated probability estimates that fail to reflect the true correctness of predicted triples and can therefore lead to erroneous decisions. Moreover, existing calibration methods are not well suited to KGE models, and no dedicated probability calibration method has been designed specifically for them. In this paper, we propose KGE Calibrator+, the first probability calibration method tailored to KGE models, to enhance the trustworthiness of their predictions. To this end, we introduce a Jump Selection Strategy, which retains the most informative data while filtering out less significant data, and Multi-Binning Scaling, which models different probability levels separately to increase model capacity and flexibility. Furthermore, we propose a Wasserstein distance-based loss function that improves both calibration performance and optimization stability. Extensive experiments on multiple datasets demonstrate that KGE Calibrator+ consistently outperforms existing calibration methods in both effectiveness and efficiency, making it a promising solution for probability calibration in KGE models.
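Since only the abstract is available on this page, the sketch below is an illustrative reading of two of the stated components, not the authors' implementation: a per-bin temperature scaler standing in for Multi-Binning Scaling, fitted by minimizing a 1-D Wasserstein loss between calibrated probabilities and the empirical labels. All names (multi_binning_scale, fit_calibrator, the quantile bin edges) are hypothetical, and the Jump Selection Strategy is omitted because the abstract gives too little detail to reconstruct it.

```python
import numpy as np
from scipy.optimize import minimize

def wasserstein_1d(p, q):
    """1-Wasserstein distance between two equal-size 1-D samples:
    the mean absolute difference of their sorted values."""
    return np.mean(np.abs(np.sort(p) - np.sort(q)))

def multi_binning_scale(scores, temps, edges):
    """Apply a separate temperature to each score bin before the
    sigmoid, so different probability levels are modeled separately."""
    bins = np.clip(np.digitize(scores, edges) - 1, 0, len(temps) - 1)
    return 1.0 / (1.0 + np.exp(-scores / temps[bins]))

def fit_calibrator(scores, labels, n_bins=5):
    """Fit one temperature per quantile bin by minimizing the
    Wasserstein loss between calibrated probabilities and labels."""
    edges = np.quantile(scores, np.linspace(0.0, 1.0, n_bins + 1))

    def loss(log_temps):
        # Optimize in log space so temperatures stay positive.
        probs = multi_binning_scale(scores, np.exp(log_temps), edges)
        return wasserstein_1d(probs, labels.astype(float))

    result = minimize(loss, np.zeros(n_bins), method="Nelder-Mead")
    return np.exp(result.x), edges

# Hypothetical usage: calibrate raw KGE plausibility scores against
# binary "triple is correct" labels from a held-out validation set.
rng = np.random.default_rng(0)
val_scores = rng.normal(size=1000)
val_labels = (rng.random(1000) < 1 / (1 + np.exp(-2 * val_scores))).astype(int)
temps, edges = fit_calibrator(val_scores, val_labels)
calibrated = multi_binning_scale(val_scores, temps, edges)
```

The sorting-based Wasserstein loss above is valid only for equal-size samples; it is one simple way to realize a "Wasserstein distance-based loss function", chosen here for clarity rather than fidelity to the paper.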
Paper Type: Long
Research Area: Efficient/Low-Resource Methods for NLP
Research Area Keywords: Probability Calibration; Knowledge Graph Embedding Model
Contribution Types: Approaches for low compute settings-efficiency
Languages Studied: English
Submission Number: 8309