Finding Tailored Step Examples for the Next Step: A Targeted Step-wise Retrieval Framework for Guiding LLM Reasoning
Keywords: in-context learning, prompting, retrieval, large language model, NLP
TL;DR: We introduce TSS, a framework that enhances LLM reasoning by retrieving targeted, step-level guidance.
Abstract: Large language models (LLMs) have shown strong performance in mathematical reasoning, supported by approaches such as In-Context Learning (ICL) and Retrieval-Augmented Generation (RAG). However, existing methods often provide entire problems as examples, which is too coarse-grained for multi-step reasoning. Many steps in a retrieved problem may not align with the reasoning trajectory of the target problem, and some may even mislead the inference process. To address this limitation, we propose TSS (Tailored Step Search), a framework that enhances LLM reasoning through targeted step-level retrieval. TSS enables a model to dynamically decide when to retrieve a single logically consistent next step, using the current problem and its intermediate state as the query. The framework consists of two main components. First, we design structured training data and develop a Step Retriever, trained with a contrastive learning strategy to capture the logical flow between consecutive steps. Second, we train a Generator with a two-phase curriculum: it first learns to predict whether retrieval is necessary, and then learns to generate step-by-step reasoning in a structured format. Experiments on four mathematical reasoning datasets, across four backbone LLMs and multiple few-shot settings, demonstrate that TSS substantially improves reasoning accuracy and reliability.
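As a rough illustration of the step-level retrieval idea described in the abstract, the sketch below forms the query from the target problem plus the partial reasoning state and returns the single most similar candidate next step from a step bank. This is a hypothetical toy (bag-of-words cosine similarity in place of the paper's contrastively trained Step Retriever), not the authors' implementation:

```python
# Hypothetical sketch of TSS-style step-level retrieval, NOT the authors' code.
# The query is the problem plus the intermediate reasoning state; the retriever
# returns one candidate next step from a step bank.
from collections import Counter
import math

def embed(text):
    """Toy bag-of-words embedding; the paper trains a contrastive Step Retriever instead."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_next_step(problem, partial_steps, step_bank):
    """Return the bank step most similar to the current reasoning state."""
    query = embed(problem + " " + " ".join(partial_steps))
    return max(step_bank, key=lambda s: cosine(query, embed(s)))

# Toy step bank (illustrative strings only).
step_bank = [
    "Apply the distributive law to expand the product.",
    "Isolate x by subtracting 3 from both sides.",
    "Take the square root of both sides.",
]
step = retrieve_next_step(
    "Solve x + 3 = 7 for x.",
    ["Start from the equation x + 3 = 7."],
    step_bank,
)
print(step)
```

The key contrast with problem-level RAG is that the query here encodes the intermediate state, so retrieval can return exactly one logically consistent next step rather than a whole worked example.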
Our code is available at https://anonymous.4open.science/r/TSS-D930/.
Primary Area: applications to computer vision, audio, language, and other modalities
Submission Number: 23891