RoRecomp: Enhancing Reasoning Efficiency via Rollout Response Recomposition in Reinforcement Learning
Keywords: large language model, reasoning model, reinforcement learning, efficient AI
TL;DR: RoRecomp compresses LLM reasoning by strategically recomposing training data, achieving significant length reduction without performance loss.
Abstract: Reinforcement learning with verifiable rewards (RLVR) has proven effective in eliciting complex reasoning in large language models (LLMs). However, standard RLVR training often produces excessively verbose reasoning (in reasoning tasks) and inefficient exploration trajectories (in agentic settings): outcome-only rewards provide no incentive for efficiency, and the high variance in response length within relatively small rollout groups yields noisy optimization signals.
To address this, we propose Rollout Response Recomposition (RoRecomp), a plug-and-play method that guides models toward concise reasoning by strategically recomposing the training data.
RoRecomp separates responses into two distinct batch types: 1) priority batches, which combine the short-correct and long-incorrect responses selected from online batches to provide a clear gradient signal for brevity, and 2) compensation batches, which utilize the remaining responses stored in a replay buffer to maintain training stability and prevent model collapse.
We evaluate RoRecomp across three settings and observe substantial efficiency gains: it reduces reasoning length by 27.7\% in zero RL training, cuts unnecessary tool calls by 46.8\% while improving accuracy in agentic RL, and achieves up to 52.5\% length reduction in thinking compression, all with minimal performance impact.
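The abstract describes only the high-level recomposition of rollouts into priority and compensation batches; the sketch below illustrates one plausible way this could work. The `Response` record, the `top_frac` selection fraction, the deque-based replay buffer, and the function names are all assumptions for illustration, not the authors' implementation.

```python
# A minimal sketch (not the authors' code) of rollout response recomposition.
# Assumed: each rollout response carries its generated tokens and a
# verifiable-reward correctness flag; selection fractions are hypothetical.
from dataclasses import dataclass
from collections import deque
from typing import List

@dataclass
class Response:
    tokens: List[int]   # generated token ids
    correct: bool       # outcome of the verifiable reward check

def recompose(rollouts: List[Response],
              replay_buffer: deque,
              top_frac: float = 0.5) -> List[Response]:
    """Split one rollout group into a priority batch plus replay-buffer leftovers.

    Priority batch = shortest correct responses + longest incorrect responses,
    giving a clear gradient signal toward concise, correct reasoning.
    Remaining responses are stored for later compensation batches.
    """
    correct = sorted((r for r in rollouts if r.correct),
                     key=lambda r: len(r.tokens))
    incorrect = sorted((r for r in rollouts if not r.correct),
                       key=lambda r: len(r.tokens), reverse=True)

    k_c = max(1, int(len(correct) * top_frac)) if correct else 0
    k_i = max(1, int(len(incorrect) * top_frac)) if incorrect else 0

    priority = correct[:k_c] + incorrect[:k_i]
    leftovers = correct[k_c:] + incorrect[k_i:]
    replay_buffer.extend(leftovers)  # consumed later as compensation batches
    return priority

def compensation_batch(replay_buffer: deque, batch_size: int) -> List[Response]:
    """Draw stored responses to stabilize training between priority updates."""
    return [replay_buffer.popleft()
            for _ in range(min(batch_size, len(replay_buffer)))]
```

Under these assumptions, a trainer would alternate: update on the priority batch returned by `recompose`, then periodically update on a `compensation_batch` drawn from the buffer to maintain stability and prevent collapse.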
Supplementary Material: zip
Primary Area: applications to computer vision, audio, language, and other modalities
Submission Number: 15473