Abstract: Large Language Models (LLMs) have demonstrated strong performance on complex tasks that require both extensive knowledge and reasoning abilities. However, the existing LLM inference pipeline operates as an opaque process without explicit separation between knowledge retrieval and reasoning steps, making the model's decision-making process unclear and disorganized. Recent research has shown that this ambiguity can lead to issues such as knowledge forgetting, which significantly impact the reliability of LLMs. In this paper, we propose a novel language model inference paradigm that decomposes the complex inference process into two distinct and explicit actions: \textbf{(1) memory recall}, which retrieves relevant knowledge stored in the LLM, and \textbf{(2) reasoning}, which performs logical steps based on the recalled knowledge. To facilitate this decomposition, we introduce two special tokens, \textbf{$\langle \text{memory} \rangle$} and \textbf{$\langle \text{reason} \rangle$}, which guide the model to distinguish between steps that require knowledge retrieval and those that involve reasoning. Our experimental results show that this decomposition not only improves LLM performance on utility benchmarks but also enhances interpretability during inference, enabling users to identify sources of error and refine model responses effectively.
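To make the decomposition concrete, the sketch below shows how an inference trace tagged with the two special tokens could be split into alternating memory-recall and reasoning steps. This is a minimal illustration under assumed conventions, not the paper's actual implementation: the literal token strings, the `parse_trace` helper, and the example trace are hypothetical.

```python
import re

# Hypothetical literal forms of the two special tokens described in the abstract.
MEMORY_TOKEN = "<memory>"
REASON_TOKEN = "<reason>"

def parse_trace(trace: str) -> list[tuple[str, str]]:
    """Split a generated trace into (action, content) steps.

    Assumes each step begins with either <memory> or <reason>; the exact
    serialization used in the paper may differ.
    """
    pattern = re.compile(r"(<memory>|<reason>)(.*?)(?=<memory>|<reason>|$)", re.S)
    steps = []
    for token, content in pattern.findall(trace):
        action = "memory recall" if token == MEMORY_TOKEN else "reasoning"
        steps.append((action, content.strip()))
    return steps

# Example (hypothetical) trace from a model trained to emit the two tokens.
trace = (
    "<memory>The Eiffel Tower was completed in 1889."
    "<reason>1889 falls in the 19th century, so the tower is a 19th-century structure."
)

for action, content in parse_trace(trace):
    print(f"[{action}] {content}")
```

Separating the trace this way is what allows a user to attribute an error either to a faulty recalled fact or to a faulty reasoning step, as the abstract describes.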
Paper Type: Long
Research Area: Language Modeling
Research Area Keywords: novel language model inference paradigm, interpretability
Contribution Types: Model analysis & interpretability, NLP engineering experiment
Languages Studied: English
Submission Number: 11