Keywords: LLM Reasoning, tree, reinforcement learning
TL;DR: The proposed TreePO organizes generation into a tree structure, allowing common reasoning steps to be shared across rollouts, which makes the process more stable and faster.
Abstract: Recent advancements in aligning large language models via reinforcement learning have achieved remarkable gains in solving complex reasoning problems, but at the cost of expensive on-policy rollouts and limited exploration of diverse reasoning paths.
In this work, we introduce TreePO, a self-guided rollout algorithm that views sequence generation as a tree-structured search process.
Combining a dynamic tree-sampling policy with fixed-length segment decoding, TreePO leverages local uncertainty to decide when to spawn additional branches.
By amortizing computation across common prefixes and pruning low-value paths early, TreePO substantially reduces the per-update compute burden while preserving or enhancing exploration diversity.
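To make the sampling scheme concrete, below is a minimal Python sketch of a segment-wise tree rollout with uncertainty-driven branching and budget-based pruning. The interfaces `generate_segment`, `segment_uncertainty`, and `is_terminal`, along with the specific thresholds, are illustrative assumptions rather than the paper's implementation.

```python
# Hypothetical sketch of a TreePO-style segment-wise tree rollout.
# `generate_segment` decodes a fixed-length continuation of a prefix,
# `segment_uncertainty` scores local uncertainty (e.g. mean token entropy);
# both are assumed interfaces, not APIs from the paper.

from dataclasses import dataclass, field
from typing import Callable, List, Optional

@dataclass
class Node:
    tokens: List[int]                           # tokens of this segment only
    parent: Optional["Node"] = None
    children: List["Node"] = field(default_factory=list)

    def prefix(self) -> List[int]:
        """Full token prefix shared with descendants (amortized via the tree)."""
        return (self.parent.prefix() if self.parent else []) + self.tokens

def tree_rollout(
    prompt: List[int],
    generate_segment: Callable[[List[int], int], List[int]],
    segment_uncertainty: Callable[[List[int]], float],
    is_terminal: Callable[[List[int]], bool],
    segment_len: int = 64,
    max_depth: int = 16,
    branch_threshold: float = 0.7,   # spawn an extra branch above this uncertainty
    max_width: int = 8,              # budget on live branches
) -> List[List[int]]:
    root = Node(tokens=prompt)
    frontier, finished = [root], []
    for _ in range(max_depth):
        if not frontier:
            break
        next_frontier: List[Node] = []
        for node in frontier:
            prefix = node.prefix()
            segment = generate_segment(prefix, segment_len)
            child = Node(tokens=segment, parent=node)
            node.children.append(child)
            if is_terminal(prefix + segment):
                finished.append(child)
                continue
            # Local uncertainty decides whether this prefix deserves more branches;
            # repeated frontier entries rely on stochastic decoding to diverge.
            n_branches = 2 if segment_uncertainty(segment) > branch_threshold else 1
            next_frontier.extend([child] * n_branches)
        # Keep the frontier within budget (a stand-in for pruning low-value paths).
        frontier = next_frontier[:max_width]
    return [leaf.prefix() for leaf in finished]
```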
Key contributions include: (1) a segment-wise sampling algorithm that alleviates the KV-cache burden by decoding contiguous segments and spawning new branches, together with an early-stop mechanism; (2) a tree-based segment-level advantage estimation method that considers proximal policy optimization at both the global and local level (sketched below); and (3) an analysis of the effectiveness of probability- and quality-driven dynamic divergence and of the fallback strategy.
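As a rough illustration of how a segment-level advantage might blend a global group baseline with a local, sibling-based baseline, here is a short Python sketch; the mixing weight `alpha` and the bookkeeping structures are assumptions made for illustration and do not reproduce the paper's exact estimator.

```python
# Illustrative segment-level advantage mixing a global group baseline with a
# local sibling baseline; the exact TreePO formulation is not reproduced here.

from statistics import mean, pstdev
from typing import Dict, List

def segment_advantages(
    rewards: Dict[str, float],        # trajectory id -> final (outcome) reward
    siblings: Dict[str, List[str]],   # segment id -> trajectories branching from its parent prefix
    members: Dict[str, List[str]],    # segment id -> trajectories passing through this segment
    alpha: float = 0.5,               # assumed weight between global and local terms
) -> Dict[str, float]:
    all_rewards = list(rewards.values())
    global_baseline = mean(all_rewards)
    global_scale = pstdev(all_rewards) or 1.0   # avoid division by zero
    advantages: Dict[str, float] = {}
    for seg, traj_ids in members.items():
        # Expected return of the segment: mean reward of trajectories through it.
        seg_return = mean(rewards[t] for t in traj_ids)
        # Local baseline: trajectories that branched from the same shared prefix.
        local_baseline = mean(rewards[t] for t in siblings[seg])
        advantages[seg] = (
            alpha * (seg_return - global_baseline) / global_scale
            + (1 - alpha) * (seg_return - local_baseline)
        )
    return advantages
```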
We empirically validate TreePO's performance gains on a set of reasoning benchmarks and show that its sampling design saves 22% to 43% of GPU hours for the trained models, while reducing sampling compute for existing models by up to 40% at the trajectory level and 35% at the token level.
While offering a free lunch of inference efficiency, TreePO reveals a practical path toward scaling RL-based post-training with fewer samples and less compute.
Primary Area: reinforcement learning
Submission Number: 18882