Abstract: Large language models (LLMs) have demonstrated increasingly strong reasoning capabilities, achieving remarkable progress in knowledge graph question answering (KGQA). However, a key challenge in such systems is non-deterministic reasoning: the model indecisively activates multiple semantically related knowledge graph edges for a given query, frequently leading to incorrect answers. To address this issue, we propose Diagnosing and Remedying Representation Deficiencies for Deterministic Reasoning in KGQA (DR2). DR2 identifies and localizes non-deterministic reasoning behaviors, uncovering the underlying semantic representation deficiencies in LLMs. Building on this diagnosis, we design abductive reasoning-based preference learning, which promotes fine-grained semantic discrimination and mitigates non-deterministic reasoning errors. Experimental results demonstrate that DR2 significantly outperforms several strong baselines, achieving state-of-the-art performance on the widely used WebQSP and CWQ benchmarks.
Paper Type: Long
Research Area: Question Answering
Research Area Keywords: knowledge base QA, Question Answering
Contribution Types: NLP engineering experiment
Languages Studied: English
Submission Number: 9355