Abstract: Knowledge Graph Completion (KGC) is crucial for addressing Knowledge Graph (KG) incompleteness, a key limitation for downstream applications. Existing KGC methods, including Large Language Model (LLM)-based approaches, often struggle to produce factually correct predictions with transparent reasoning. We introduce ReflectKGC, a novel, training-free Plan-Act-Judge agent framework designed to tackle these challenges. ReflectKGC employs LLMs across three stages to deliver interpretable and accurate KGC: 1) Planning: An LLM profiles each relation from example triples, inferring its semantics and entity type constraints. 2) Acting: An Evaluator LLM assesses candidate entities, generating scores and human-readable rationales grounded in the profiled relation. 3) Judging: Critically, a Judge LLM scrutinizes the Evaluator's rationales, re-scoring or filtering candidates whose reasoning is flawed, thereby actively correcting predictions to enhance accuracy. This rationale-driven active correction enables ReflectKGC to deliver more accurate and trustworthy results. Experiments on standard benchmarks demonstrate ReflectKGC's state-of-the-art performance, yielding verifiable and accurate completions.
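The three-stage pipeline described in the abstract can be pictured as a short loop over LLM calls. The sketch below is illustrative only: the `call_llm` placeholder, the prompt wording, and the score/verdict parsing are assumptions made for exposition, not the paper's actual prompts or implementation.

```python
# Minimal sketch of a Plan-Act-Judge flow for KGC, assuming a generic
# text-in/text-out LLM client. All names and prompts here are hypothetical.
from dataclasses import dataclass

@dataclass
class Scored:
    entity: str
    score: float
    rationale: str

def call_llm(prompt: str) -> str:
    """Placeholder for any chat-completion API; swap in a real client."""
    raise NotImplementedError

def plan(relation: str, example_triples: list[tuple[str, str, str]]) -> str:
    """Planning: profile the relation's semantics and entity-type constraints."""
    examples = "\n".join(f"({h}, {relation}, {t})" for h, _, t in example_triples)
    return call_llm(f"Describe the semantics and entity-type constraints of "
                    f"'{relation}' given these triples:\n{examples}")

def act(head: str, relation: str, profile: str, candidates: list[str]) -> list[Scored]:
    """Acting: an Evaluator LLM scores each candidate and explains its score."""
    scored = []
    for c in candidates:
        reply = call_llm(f"Relation profile: {profile}\n"
                         f"Query: ({head}, {relation}, ?)\nCandidate: {c}\n"
                         f"Reply as 'score between 0 and 1 | rationale'.")
        score_str, _, rationale = reply.partition("|")
        scored.append(Scored(c, float(score_str.strip()), rationale.strip()))
    return scored

def judge(scored: list[Scored]) -> list[Scored]:
    """Judging: a Judge LLM re-scores or drops candidates with flawed rationales."""
    kept = []
    for s in scored:
        verdict = call_llm(f"Is this rationale sound for scoring '{s.entity}' "
                           f"at {s.score}? Rationale: {s.rationale}\n"
                           f"Answer 'keep', 'drop', or a corrected score (0-1).").strip().lower()
        if verdict == "drop":
            continue
        if verdict != "keep":
            try:
                s.score = float(verdict)  # Judge overrides the Evaluator's score
            except ValueError:
                pass  # unparseable verdict: fall back to the Evaluator's score
        kept.append(s)
    return sorted(kept, key=lambda s: s.score, reverse=True)
```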
Paper Type: Long
Research Area: NLP Applications
Research Area Keywords: knowledge graphs
Contribution Types: Data analysis
Languages Studied: English
Submission Number: 5478