KG-CF: Knowledge Graph Completion with Context Filtering under the Guidance of Large Language Models

ACL ARR 2024 June Submission 3703 Authors

16 Jun 2024 (modified: 17 Jul 2024) · ACL ARR 2024 June Submission · CC BY 4.0
Abstract: Recent years have witnessed unprecedented performance from Large Language Models (LLMs) on various downstream tasks, with knowledge graph completion standing as a representative example. Nevertheless, despite emerging explorations of LLMs for knowledge graph completion, most LLMs struggle to produce quantitative triplet scores. This limitation fundamentally conflicts with the inherently ranking-based nature of the knowledge graph completion task and its associated evaluation protocols. In this paper, we propose KG-CF, a novel framework for knowledge graph completion. In particular, KG-CF not only harnesses the strong reasoning capabilities of LLMs through context filtering but also aligns with ranking-based knowledge graph completion tasks and their evaluation protocols. Empirical evaluations on real-world datasets validate the superiority of KG-CF on knowledge graph completion tasks.
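The abstract does not detail KG-CF's pipeline, so the following Python sketch is only a hypothetical illustration of the general idea it gestures at: using an LLM as a yes/no context filter rather than a numeric scorer, while the actual ranking (as required by ranking-based evaluation such as MRR or Hits@k) comes from a conventional KGC scoring model. All names here (`llm_says_relevant`, `kgc_score`, the toy triples) are invented for illustration and are not taken from the paper.

```python
# Hypothetical sketch, not the authors' implementation: the LLM only answers a
# binary relevance question about each context, and ranking still relies on
# numeric scores from a conventional KGC model.

def llm_says_relevant(context: str, triplet: tuple) -> bool:
    """Placeholder for an LLM call that answers 'is this context relevant to
    judging the triplet?' with yes/no instead of a quantitative score."""
    head, relation, tail = triplet
    return head in context or tail in context  # stand-in heuristic, not an LLM

def kgc_score(triplet: tuple, contexts: list) -> float:
    """Placeholder for a standard KGC scoring model (e.g., an embedding-based
    scorer); here simply the number of surviving contexts, for illustration."""
    return float(len(contexts))

def rank_candidates(head: str, relation: str, candidate_tails: list, all_contexts: list):
    scored = []
    for tail in candidate_tails:
        triplet = (head, relation, tail)
        # Context-filtering step: keep only contexts the LLM deems relevant.
        kept = [c for c in all_contexts if llm_says_relevant(c, triplet)]
        scored.append((tail, kgc_score(triplet, kept)))
    # Ranking-based evaluation needs an ordered candidate list, not raw labels.
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

if __name__ == "__main__":
    contexts = ["Paris is located in France", "Berlin is located in Germany"]
    print(rank_candidates("Paris", "capital_of", ["France", "Germany"], contexts))
```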
Paper Type: Long
Research Area: NLP Applications
Research Area Keywords: Knowledge Graph Completion, Large Language Models, Pretrained Language Models
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Data analysis
Languages Studied: English
Submission Number: 3703