PR-KGC: Text-enhanced Knowledge Graph Completion with Pair-wise Re-ranking

Published: 2025, Last Modified: 23 Jan 2026. ICASSP 2025. License: CC BY-SA 4.0.
Abstract: Recent advances in Knowledge Graph Completion (KGC) often adopt a two-stage pipeline that combines triple-based retrieval with text-based re-ranking. However, point-wise re-rankers, which score candidates individually, often fail to capture subtle distinctions between similar candidates because they never compare candidates directly. List-wise re-rankers address this by evaluating all candidates simultaneously, but generating permutations over the full candidate list is computationally challenging for pre-trained language models (PLMs) and can cause omissions, refusals, and, above all, inconsistencies in the output. To address these challenges, this paper introduces a Pair-wise Re-ranking method for Knowledge Graph Completion (PR-KGC), which avoids both the calibrated scoring required by point-wise methods and the permutation outputs required by list-wise methods. It reduces the burden on PLMs by asking them only to perform nuanced comparisons between pairs of candidates, rather than over all candidates at once. During inference, our approach processes all ordered pairs among the top-k candidates, ensuring a thorough evaluation and a consistent ranking. Extensive experiments on link prediction tasks demonstrate that the proposed strategy elevates much smaller PLMs (∼100M parameters) to state-of-the-art performance, outperforming baselines built on RoBERTa-Large (∼3x larger) and LLaMA2-7B (∼70x larger). Case studies further show that these gains stem from the model's improved ability to discern subtle differences between similar candidates. With nearly identical Hits@3, PR-KGC outperforms the most competitive baselines by approximately 1.6–3.8% in Hits@1.
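The pair-wise re-ranking scheme described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the comparator `prefers(query, a, b)` is a hypothetical stand-in for the PLM that judges which of two candidates better completes the query triple, and aggregation by win counts is one common way to turn pairwise preferences into a ranking.

```python
# Hedged sketch of pair-wise re-ranking over top-k retrieved candidates.
# `prefers(query, a, b)` is an assumed placeholder for a PLM comparator;
# the actual PR-KGC model and prompt format are not shown here.

from itertools import permutations

def pairwise_rerank(query, candidates, prefers):
    """Re-rank `candidates` by scoring every ordered pair (a, b).

    Each candidate earns one point whenever the comparator prefers it
    over another candidate. Processing both orderings (a, b) and (b, a)
    covers all permutations of pairs, smoothing out any position bias
    in the comparator and keeping the final ranking consistent.
    """
    wins = {c: 0 for c in candidates}
    for a, b in permutations(candidates, 2):  # all ordered pairs
        if prefers(query, a, b):
            wins[a] += 1
    # Sort by win count, descending; ties keep retrieval order (stable sort).
    return sorted(candidates, key=lambda c: wins[c], reverse=True)

# Toy usage with a trivial comparator (prefer the alphabetically earlier name):
ranking = pairwise_rerank("(Paris, capital_of, ?)",
                          ["Germany", "France", "Spain"],
                          lambda q, a, b: a < b)
```

For k candidates this issues k·(k−1) comparisons, which is why the method is applied only to the top-k candidates from the retrieval stage rather than to the full entity set.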