Finetuned Large Language Models as Decomposers: Step-by-Step Reasoning for Knowledge Base Question Answering

ACL ARR 2025 February Submission 588 Authors

09 Feb 2025 (modified: 09 May 2025), ACL ARR 2025 February Submission, CC BY 4.0
Abstract: As semantic parsing and complex reasoning are fundamental to complex question answering over knowledge bases (KBQA), a growing trend is to leverage large language models (LLMs), which exhibit outstanding semantic understanding and logical reasoning abilities, for this task. However, most existing LLM-based KBQA systems still operate as black boxes, unable to explain their derived results, which motivates us to develop an interpretable and trustworthy KBQA system. In this paper, we introduce question templates as intermediate outcomes of the logical reasoning of LLMs, making the multi-step reasoning process of the KBQA system interpretable. Specifically, our method, named \emph{Keqing}, first decomposes a complex question into simpler sub-questions according to predefined question templates using LLMs, and then addresses each sub-question by retrieving relevant information from the knowledge base or performing logical reasoning, until the final answer is reached. To make \emph{Keqing} more practical and trustworthy, we develop an automatic pipeline for question template construction that scales up the number of question templates at low cost, and we incorporate uncertainty estimation to attach confidence levels to the reasoning answers. Extensive experiments demonstrate that \emph{Keqing} achieves performance comparable to previous state-of-the-art methods while offering better interpretability through a step-by-step reasoning process.
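To make the decompose-then-answer pipeline concrete, below is a minimal sketch based only on the abstract above. Everything here is a hypothetical illustration, not the authors' implementation: the template table, the `SubQuestion` structure, the `decompose`, `answer_from_kb`, and `keqing_style_pipeline` names, the confidence values, and the toy knowledge base are all stand-ins. A real system would prompt a finetuned LLM for the template-guided decomposition and query an actual KB.

```python
from dataclasses import dataclass

# Hypothetical predefined question templates; the paper's automatic
# construction pipeline would build such a table at scale.
TEMPLATES = {
    "who_directed": "Who directed {film}?",
    "films_by_person": "Which films did {person} direct?",
}

@dataclass
class SubQuestion:
    template_id: str   # which predefined template the decomposer matched
    text: str          # the instantiated sub-question (may reference earlier answers)
    confidence: float  # stand-in for the paper's uncertainty estimate

def decompose(question: str) -> list[SubQuestion]:
    """Stand-in for the LLM decomposer: map a complex question onto
    predefined templates. Here the two-hop split is hard-coded; a real
    system would obtain it by prompting a finetuned LLM."""
    return [
        SubQuestion("who_directed",
                    TEMPLATES["who_directed"].format(film="Inception"), 0.92),
        SubQuestion("films_by_person",
                    TEMPLATES["films_by_person"].format(person="{answer_1}"), 0.87),
    ]

def answer_from_kb(resolved_question: str) -> str:
    """Stand-in for KB retrieval: look the sub-question up in a toy KB."""
    toy_kb = {
        "Who directed Inception?": "Christopher Nolan",
        "Which films did Christopher Nolan direct?": "Inception; Interstellar; Dunkirk",
    }
    return toy_kb.get(resolved_question, "<no answer found>")

def keqing_style_pipeline(question: str) -> str:
    """Solve sub-questions in order, feeding each answer into later steps
    and printing an interpretable, confidence-annotated trace."""
    bindings: dict[str, str] = {}
    answer = "<none>"
    for i, sub in enumerate(decompose(question), start=1):
        # Substitute answers from earlier reasoning steps, if referenced.
        resolved = sub.text.format(**bindings) if "{" in sub.text else sub.text
        answer = answer_from_kb(resolved)
        bindings[f"answer_{i}"] = answer
        print(f"Step {i} [{sub.template_id}, conf={sub.confidence:.2f}]: "
              f"{resolved} -> {answer}")
    return answer

if __name__ == "__main__":
    final = keqing_style_pipeline(
        "Which films were directed by the director of Inception?")
    print("Final answer:", final)
```

The printed per-step trace, with each resolved sub-question tagged by its template and a confidence score, is the kind of step-by-step, uncertainty-annotated output the abstract highlights as the source of interpretability.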
Paper Type: Long
Research Area: Question Answering
Research Area Keywords: LLM, KBQA
Contribution Types: Model analysis & interpretability
Languages Studied: English
Submission Number: 588