Improving Knowledge Base Question Answering via Retrieval Enhancement and Stepwise Reasoning

Published: 2025, Last Modified: 07 Jan 2026 · ICASSP 2025 · CC BY-SA 4.0
Abstract: Large-scale knowledge base question answering (KBQA) has become increasingly vital across various fields. In the era of large language models (LLMs), combining knowledge base retrieval with large models for knowledge reasoning has become the mainstream approach to KBQA. However, this approach faces two primary challenges: (1) the high computational cost and low accuracy of similarity-based path retrieval, and (2) the relatively low accuracy of obtaining answers directly from large models. In this paper, we introduce Retrieval Enhancement and Stepwise Reasoning (RESR), a novel method that transforms path retrieval into text semantic understanding to minimize unnecessary interference from path information in the reasoning process, guiding the LLM to generate interpretable reasoning paths rather than directly producing answers. Specifically, RESR fine-tunes a generative model for text semantic understanding to swiftly and accurately filter path information relevant to the query from a large-scale knowledge graph (KG). Additionally, we employ the Chain-of-Thought (CoT) method to guide LLMs in step-by-step reasoning, verifying the logical coherence of reasoning paths rather than deriving answers directly. Our proposed method achieves state-of-the-art (SOTA) performance on the WebQuestionsSP (WQSP) and ComplexWebQuestions (CWQ) benchmarks.
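
To make the stepwise-reasoning idea concrete, below is a minimal sketch (not the authors' code) of how a retrieved KG path might be turned into a CoT-style verification prompt: instead of asking the LLM for the answer directly, the prompt asks it to check each hop of the path in turn, as the abstract describes. The path format, prompt wording, and example entities are assumptions for illustration.

```python
# Sketch: format a retrieved (head, relation, tail) path as a Chain-of-Thought
# verification prompt, so the LLM checks each hop step by step rather than
# emitting an answer directly. Prompt wording and path format are assumptions.

def build_cot_verification_prompt(question: str,
                                  path: list[tuple[str, str, str]]) -> str:
    """Turn a question and a candidate KG path into a stepwise verification prompt."""
    lines = [f"Question: {question}", "Candidate reasoning path:"]
    for i, (head, rel, tail) in enumerate(path, start=1):
        lines.append(f"  Step {i}: {head} --{rel}--> {tail}")
    lines.append(
        "Verify the path step by step: for each step, state whether it is "
        "logically consistent with the question and the previous steps. "
        "Finally, report whether the full path supports an answer, and if so, "
        "which entity that answer is."
    )
    return "\n".join(lines)


if __name__ == "__main__":
    # Hypothetical path retrieved from a knowledge graph for a 2-hop question.
    demo_path = [
        ("Barack Obama", "place_of_birth", "Honolulu"),
        ("Honolulu", "contained_by", "Hawaii"),
    ]
    prompt = build_cot_verification_prompt(
        "Which U.S. state was Barack Obama born in?", demo_path
    )
    print(prompt)  # This prompt would then be sent to an LLM for verification.
```

In this sketch the LLM's role is to validate the logical coherence of the retrieved path, which mirrors the paper's stated goal of producing interpretable reasoning paths rather than opaque answers.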