Abstract: Large language models (LLMs) are post-trained through reinforcement learning (RL) to evolve into reasoning language models (RLMs), whose hallmark of advanced reasoning is the ``aha'' moment when they begin to apply \textit{strategies}, such as self-reflection and deep thinking, within chains of thought (CoTs). Motivated by this, we propose a novel reinforced strategy injection mechanism (rSIM) that enables any LLM to become an RLM by employing a small planner to guide the LLM's CoT through the adaptive injection of reasoning strategies. To achieve this, the planner (leader agent) is jointly trained with the LLM (follower agent) using multi-agent RL (MARL), based on a leader-follower framework and straightforward rule-based rewards. Experimental results show that rSIM enables Qwen2.5-0.5B to become an RLM and significantly outperform Qwen2.5-14B. Moreover, the planner is generalizable: it needs to be trained only once and can then be applied as a plug-in to substantially improve the reasoning capabilities of existing LLMs. In addition, the planner supports continual learning across various tasks, allowing its planning abilities to gradually improve and generalize to a wider range of problems.
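For illustration, here is a minimal sketch of the leader-follower interaction loop the abstract describes. All names (Planner, Follower, STRATEGIES, rule_reward) are hypothetical placeholders, the strategy set is assumed, and the random policy stands in for the learned planner; this is not the paper's implementation.

```python
# Hypothetical sketch of the rSIM rollout described in the abstract:
# at each CoT step, a small leader (planner) injects a strategy token,
# and the follower LLM continues the chain of thought conditioned on it.
import random

# Assumed strategy vocabulary; the actual set is defined by the paper.
STRATEGIES = ["<self-reflect>", "<deep-think>", "<continue>"]

class Planner:
    """Small leader agent: picks a strategy to inject at each CoT step."""
    def act(self, cot_so_far: str) -> str:
        return random.choice(STRATEGIES)  # stand-in for a learned policy

class Follower:
    """LLM follower agent: extends the CoT given the injected strategy."""
    def step(self, cot_so_far: str, strategy: str) -> str:
        return f"{strategy} ...next reasoning step..."  # stand-in for LLM decoding

def rule_reward(answer: str, gold: str) -> float:
    """Simple rule-based reward, e.g. exact-match correctness."""
    return 1.0 if answer.strip() == gold.strip() else 0.0

def rollout(question: str, gold: str, planner: Planner,
            follower: Follower, max_steps: int = 4) -> float:
    cot = question
    for _ in range(max_steps):
        strategy = planner.act(cot)                  # leader commits first
        cot += "\n" + follower.step(cot, strategy)   # follower responds
    # In MARL training, both agents would be updated from this shared reward.
    return rule_reward(cot.splitlines()[-1], gold)
```

The leader-follower ordering matters: the planner commits to a strategy before the follower generates, and both agents are credited with the same rule-based outcome reward.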
Paper Type: Long
Research Area: Question Answering
Research Area Keywords: Question Answering, Machine Learning for NLP, Language Modeling, Generation
Contribution Types: Model analysis & interpretability, NLP engineering experiment, Publicly available software and/or pre-trained models
Languages Studied: English
Reassignment Request Area Chair: This is not a resubmission
Reassignment Request Reviewers: This is not a resubmission
A1 Limitations Section: This paper has a limitations section.
A2 Potential Risks: Yes
A2 Elaboration: Section C of the Appendix examines the potential risks associated with the proposed method.
B Use Or Create Scientific Artifacts: Yes
B1 Cite Creators Of Artifacts: Yes
B1 Elaboration: Section 5 provides a discussion of the artifacts utilized in our work, including full citations for each.
B2 Discuss The License For Artifacts: Yes
B2 Elaboration: Section 5 discusses these artifacts, particularly the datasets, to confirm their public availability.
B3 Artifact Use Consistent With Intended Use: Yes
B3 Elaboration: Section 5 describes these artifacts, focusing on their sources, accessibility, and relevance to the study.
B4 Data Contains Personally Identifying Info Or Offensive Content: No
B5 Documentation Of Artifacts: Yes
B5 Elaboration: Section 5 of the main paper and Section 1 of the Appendix discuss the artifacts related to the experiments, including their sources, availability, and role in the evaluation.
B6 Statistics For Data: Yes
B6 Elaboration: Section 5 outlines the data sources and associated metadata.
C Computational Experiments: Yes
C1 Model Size And Budget: Yes
C1 Elaboration: Section 3 and Section 5 present the model sizes, while Section B of the Appendix provides details on the token costs.
C2 Experimental Setup And Hyperparameters: Yes
C2 Elaboration: Section 5.
C3 Descriptive Statistics: Yes
C3 Elaboration: Section 5.
C4 Parameters For Packages: Yes
C4 Elaboration: Section 5.
D Human Subjects Including Annotators: No
D1 Instructions Given To Participants: N/A
D2 Recruitment And Payment: N/A
D3 Data Consent: N/A
D4 Ethics Review Board Approval: N/A
D5 Characteristics Of Annotators: N/A
E Ai Assistants In Research Or Writing: Yes
E1 Information About Use Of Ai Assistants: Yes
E1 Elaboration: We used AI assistants only for writing refinement; for example, they helped identify and correct writing errors.
Author Submission Checklist: Yes
Submission Number: 1399