REHKS-QA: Reflection-Enhanced Complex Question Answering for LLMs with Heterogeneous Knowledge Sources

ACL ARR 2025 February Submission 6261 Authors

16 Feb 2025 (modified: 09 May 2025), CC BY 4.0
Abstract: Large Language Models (LLMs) still struggle with knowledge-intensive complex question answering (CQA), which requires reasoning over multiple knowledge facts. Existing approaches commonly combine question decomposition with retrieval-augmented generation, where an LLM first decomposes a complex question into sub-questions and then retrieves relevant information from external knowledge sources to answer them sequentially. Nevertheless, such methods suffer from error propagation, primarily due to negative retrieval, where irrelevant or missing knowledge misleads the LLMs' responses. To address these challenges, we propose REHKS-QA (Reflection-Enhanced Complex Question Answering for LLMs with Heterogeneous Knowledge Sources), a novel framework that integrates unstructured knowledge, structured knowledge, and the LLMs' parametric knowledge through a stepwise reflection mechanism. Specifically, REHKS-QA first decomposes complex questions into sub-questions, retrieves relevant external knowledge from both structured and unstructured sources, and generates preliminary answers. To mitigate misleading information, the LLMs then explicitly reflect on the faithfulness of each answer by identifying supporting evidence. If no valid evidence is found, the LLMs either revise their responses or fall back on their parametric knowledge. Experimental results on two CQA benchmarks demonstrate that REHKS-QA not only outperforms state-of-the-art methods but also improves the explainability and verifiability of answers.
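The abstract describes a stepwise decompose-retrieve-answer-reflect loop. The sketch below is a minimal, hypothetical illustration of such a pipeline, not the authors' actual implementation: all names (rehks_qa, llm, text_retrieve, kg_retrieve) and prompt wordings are assumptions introduced for clarity.

```python
# Hypothetical sketch of a decompose-retrieve-answer-reflect loop as described
# in the abstract. The callables (llm, text_retrieve, kg_retrieve) are
# illustrative stand-ins, not the paper's actual components.
from typing import Callable, List


def rehks_qa(question: str,
             llm: Callable[[str], str],
             text_retrieve: Callable[[str], List[str]],
             kg_retrieve: Callable[[str], List[str]],
             max_steps: int = 5) -> str:
    """Answer a complex question via stepwise decomposition and reflection."""
    solved: List[str] = []  # resolved sub-questions and their answers
    for _ in range(max_steps):
        # 1. Decompose: ask the LLM for the next sub-question (or to finish).
        sub_q = llm(f"Question: {question}\nSolved so far: {solved}\n"
                    "Next sub-question, or FINISH if the question is answerable:")
        if sub_q.strip().startswith("FINISH"):
            break
        # 2. Retrieve from heterogeneous sources: unstructured text + knowledge graph.
        evidence = text_retrieve(sub_q) + kg_retrieve(sub_q)
        # 3. Generate a preliminary answer conditioned on the retrieved evidence.
        answer = llm(f"Sub-question: {sub_q}\nEvidence: {evidence}\nAnswer:")
        # 4. Reflect: ask the LLM to cite the evidence that supports its answer.
        support = llm(f"Sub-question: {sub_q}\nAnswer: {answer}\n"
                      f"Evidence: {evidence}\nQuote the supporting evidence, or NONE:")
        if support.strip().startswith("NONE"):
            # 5. No valid evidence found: revise, falling back on parametric knowledge.
            answer = llm(f"Answer from your own knowledge: {sub_q}")
        solved.append(f"{sub_q} -> {answer}")
    # Synthesize the final answer from the resolved sub-questions.
    return llm(f"Question: {question}\nSub-answers: {solved}\nFinal answer:")
```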
Paper Type: Long
Research Area: Question Answering
Research Area Keywords: multihop QA; open-domain QA
Languages Studied: English
Submission Number: 6261