Keywords: Reinforcement Learning; LLM Reasoning; Entropy
TL;DR: FR3E stabilizes RL for LLM reasoning by using entropy to identify uncertain steps and launching targeted explorations from those points, creating efficient feedback without dense supervision.
Abstract: Reinforcement Learning from Verifiable Rewards (RLVR) improves the reasoning abilities of Large Language Models (LLMs), but it struggles with unstable exploration. We propose FR3E (First Return, Entropy-Eliciting Explore), a structured exploration framework that identifies high-uncertainty decision points in reasoning trajectories and performs targeted rollouts to construct semantically grounded intermediate feedback. Our method provides targeted guidance without relying on dense supervision. Empirical results on mathematical reasoning benchmarks (AIME24) show that FR3E promotes more stable training, produces longer and more coherent responses, and increases the proportion of fully correct trajectories. These results highlight the framework's effectiveness in improving LLM reasoning through more robust and structured exploration.
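The abstract's core mechanism (locating high-entropy decision points in a trajectory, then launching fresh rollouts from those prefixes) can be illustrated with a minimal sketch. This is not the authors' implementation; the function names (`entropy_eliciting_points`, `targeted_rollouts`), the top-k selection rule, and the toy `rollout_fn` are illustrative assumptions.

```python
import math

def token_entropy(probs):
    # Shannon entropy of a next-token probability distribution.
    return -sum(p * math.log(p) for p in probs if p > 0)

def entropy_eliciting_points(trajectory, k=2):
    # trajectory: list of (token, next-token distribution) pairs.
    # Return indices of the k highest-entropy decision points,
    # i.e. the steps where the model was most uncertain.
    scored = [(token_entropy(dist), i) for i, (_, dist) in enumerate(trajectory)]
    scored.sort(reverse=True)
    return sorted(i for _, i in scored[:k])

def targeted_rollouts(trajectory, rollout_fn, k=2, n_rollouts=4):
    # "First return": replay the trajectory up to each uncertain point;
    # then "explore": launch n_rollouts fresh continuations from there.
    results = {}
    for i in entropy_eliciting_points(trajectory, k):
        prefix = [tok for tok, _ in trajectory[: i + 1]]
        results[i] = [rollout_fn(prefix) for _ in range(n_rollouts)]
    return results

# Toy usage: a 4-step trajectory with per-step distributions.
traj = [
    ("a", [0.9, 0.1]),
    ("b", [0.5, 0.5]),
    ("c", [0.99, 0.01]),
    ("d", [0.4, 0.3, 0.3]),
]
branches = targeted_rollouts(traj, lambda p: p + ["<cont>"], k=2, n_rollouts=2)
```

In a real RLVR loop, `rollout_fn` would be the policy LLM generating a continuation from the prefix, and the verifiable reward on each branched rollout would supply the intermediate feedback the abstract describes.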
Primary Area: reinforcement learning
Submission Number: 8527