Knowledge-Driven CoT: Exploring Faithful Reasoning in LLMs for Knowledge-intensive Question Answering

Anonymous

16 Dec 2023 · ACL ARR 2023 December Blind Submission · Readers: Everyone
Abstract: Equipped with Chain-of-Thought (CoT), large language models (LLMs) have shown impressive reasoning ability on various downstream tasks. However, suffering from hallucinations and the inability to access external knowledge, LLMs often produce incorrect or unfaithful reasoning, especially on knowledge-intensive tasks such as KBQA. To alleviate this issue, we propose a framework called Knowledge-Driven Chain-of-Thought (KD-CoT), which verifies and modifies reasoning traces in CoT through interaction with external knowledge, thereby overcoming hallucination and error propagation. Concretely, we formulate the CoT rationale of LLMs as a structured multi-round QA process. In each round, a QA system retrieves external knowledge related to the current sub-question and returns a more precise answer, and the LLM then generates subsequent reasoning steps conditioned on the returned answer. Moreover, we construct a KBQA CoT collection, which can serve as in-context learning demonstrations and be utilized as feedback augmentation to train a multi-hop question retriever. Extensive experiments on the WebQSP and ComplexWebQuestions datasets demonstrate the effectiveness of the proposed KD-CoT, which outperforms vanilla CoT ICL by 8.0 and 5.1 points, respectively. Furthermore, our feedback-augmented retriever retrieves more valuable knowledge in the multi-hop scenario, achieving significant improvements in Hit and Recall performance.
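The multi-round interaction described in the abstract can be sketched as a simple loop: the LLM either emits a sub-question (which is answered by an external QA system) or commits to a final answer grounded in the verified history. The sketch below is a toy illustration under our own assumptions; `toy_llm_step`, `toy_qa_system`, and the hard-coded knowledge table are hypothetical stand-ins, not the authors' implementation.

```python
# Minimal sketch of a KD-CoT-style interaction loop (illustrative, not the paper's code).

def toy_llm_step(question, history):
    """Stand-in for an LLM that emits the next sub-question or a final answer."""
    if len(history) == 0:
        # First round: decompose the question into a sub-question.
        return ("ask", "Who directed the film mentioned in: " + question)
    # After one round of verified evidence, commit to a final answer.
    return ("answer", history[-1][1])

def toy_qa_system(sub_question):
    """Stand-in retriever+reader returning a more precise, knowledge-grounded answer."""
    knowledge = {
        "Who directed the film mentioned in: Who directed Inception?": "Christopher Nolan",
    }
    return knowledge.get(sub_question, "unknown")

def kd_cot(question, max_rounds=4):
    """Structured multi-round QA: each CoT step is verified against external knowledge."""
    history = []  # list of (sub_question, verified_answer) pairs
    for _ in range(max_rounds):
        action, payload = toy_llm_step(question, history)
        if action == "answer":
            return payload, history
        # Replace the hallucination-prone intermediate answer with a retrieved one.
        verified = toy_qa_system(payload)
        history.append((payload, verified))
    return None, history

answer, trace = kd_cot("Who directed Inception?")
print(answer)  # -> Christopher Nolan
```

The key design point the abstract emphasizes is that each intermediate answer is overwritten by the QA system's retrieved answer before the LLM continues reasoning, which is what limits error propagation across rounds.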
Paper Type: long
Research Area: Question Answering
Contribution Types: NLP engineering experiment
Languages Studied: English