Improving multi-hop question answering with prompting explicit and implicit knowledge aligned with human reading comprehension
Abstract: Language models (LMs) use chain-of-thought (CoT) prompting to imitate human reasoning and inference, achieving notable success in multi-hop question answering (QA). Nevertheless, a gap remains between the reasoning capabilities of LMs and those of humans on complex problems. Psychological research highlights the crucial interplay between the explicit content of a text and a reader's prior knowledge during reading. However, existing studies have paid little attention to the relationship between input texts and the knowledge LMs acquire during pre-training from the standpoint of human cognition. In this paper, we propose a Prompting Explicit and Implicit knowledge (PEI) framework, which uses CoT prompt-based learning to bridge explicit and implicit knowledge, mirroring human reading comprehension for multi-hop QA. PEI leverages CoT prompts to elicit implicit knowledge from LMs for the input context, and integrates question-type information to further boost performance. We also propose two training paradigms for PEI and extend the framework to biomedical-domain QA: by employing biomedical LMs in the Knowledge Prompter to invoke implicit biomedical knowledge, we explore the fusion and relation of explicit and implicit biomedical knowledge and analyze the consistency of the domain knowledge fusion. Experimental results show that PEI performs comparably to the state of the art on HotpotQA and surpasses baselines on 2WikiMultihopQA and MuSiQue; it also achieves a significant improvement over baselines on MEDHOP. Ablation studies further validate the efficacy of the PEI framework in bridging and integrating explicit and implicit knowledge.
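The abstract describes a two-stage idea: first elicit implicit knowledge from an LM with a CoT prompt, then fuse it with the explicit context and question-type information before answering. A minimal sketch of how such prompts might be assembled is below; the function names, prompt wording, and the plain-string interface are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of a PEI-style two-stage prompting flow.
# All prompt templates here are assumed for illustration only.

def build_knowledge_prompt(context: str, question: str) -> str:
    """Stage 1: a CoT prompt asking an LM (the 'Knowledge Prompter')
    to surface the implicit background knowledge the question needs."""
    return (
        f"Context: {context}\n"
        f"Question: {question}\n"
        "Let's think step by step and list the background facts "
        "(implicit knowledge) needed to answer."
    )

def build_reader_prompt(context: str, question: str,
                        implicit_knowledge: str, question_type: str) -> str:
    """Stage 2: fuse explicit context, the elicited implicit knowledge,
    and the question type before requesting the final answer."""
    return (
        f"Question type: {question_type}\n"
        f"Context: {context}\n"
        f"Implicit knowledge: {implicit_knowledge}\n"
        f"Question: {question}\n"
        "Answer:"
    )
```

In use, the stage-1 prompt would be sent to an LM, and its output would be passed as `implicit_knowledge` into the stage-2 prompt for the reader model.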
External IDs: dblp:journals/mlc/HuangLL25