Keywords: LLM, knowledge graph, question answering, internal knowledge
Abstract: Large Language Models (LLMs) exhibit exceptional capabilities across natural language tasks but are constrained by static knowledge, potential hallucinations, and opaque reasoning. Integrating external Knowledge Graphs (KGs) has emerged as a promising remedy. While agent-based paradigms enhance knowledge exploration by iteratively retrieving grounded facts from KGs, they often adopt a conservative KG-centric strategy that deliberately avoids the LLM's internal knowledge, leaving them vulnerable to failure whenever links are missing, a common situation even in largely complete KGs.
We propose a KG–LLM collaborative framework that recasts the LLM's internal knowledge as dynamic knowledge probes: partially specified triples generated by our Guidance Graph of Thought (GGoT) reasoning backbone. These probes guide KG exploration, surface potential incompleteness, and trigger trust-aware bridging, which applies existence and necessity checks before integrating LLM-derived entities. Cross-triple constraint-based disambiguation then enforces consistency, relying on KG structure for credible nodes and on LLM validation for low-confidence ones (a schematic sketch of this loop follows the abstract).
Extensive experiments across multiple benchmarks show that our framework consistently outperforms existing approaches, with ablation studies verifying the contribution and necessity of each component of our design.
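To make the probe-and-bridge loop concrete, here is a minimal, hypothetical Python sketch of the cycle the abstract describes. Every name (Triple, generate_probes, explore, bridge, the toy KG) is an illustrative assumption, not the paper's actual interface, and the necessity check is stubbed out.

```python
# Hypothetical sketch of the probe-and-bridge loop; not the authors' code.
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Triple:
    head: str
    relation: str
    tail: Optional[str]  # None marks the unknown slot of a partially specified triple

def generate_probes(question: str) -> list[Triple]:
    # GGoT step (assumed): the LLM decomposes the question into
    # partially specified triples that act as knowledge probes.
    return [Triple("Best Picture 2020", "won_by", None)]

def explore(kg: dict, probe: Triple) -> list[str]:
    # KG exploration: look up candidate tails for the probe's known slots.
    return kg.get((probe.head, probe.relation), [])

def bridge(llm_guess: str, kg_entities: set[str]) -> Optional[str]:
    # Trust-aware bridging (assumed logic): accept an LLM-derived entity
    # only if it passes an existence check against the KG's entity set;
    # the paper's necessity check is richer and is stubbed to True here.
    exists = llm_guess in kg_entities
    necessary = True
    return llm_guess if exists and necessary else None

if __name__ == "__main__":
    kg = {("Best Picture 2020", "won_by"): []}      # the link is missing from the KG
    kg_entities = {"Parasite", "Bong Joon-ho"}
    for probe in generate_probes("Who directed the 2020 Best Picture winner?"):
        candidates = explore(kg, probe)
        if not candidates:                           # incompleteness detected:
            guess = bridge("Parasite", kg_entities)  # fall back to LLM knowledge
            candidates = [guess] if guess else []
        print(probe, "->", candidates)
```

In the full framework, an accepted LLM-derived entity would additionally be checked for cross-triple consistency against the other probes before it is used in the final answer.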
Supplementary Material: zip
Primary Area: interpretability and explainable AI
Submission Number: 25141