PLAN-TUNING: Post-Training Language Models to Learn Step-by-Step Planning for Complex Problem Solving
Abstract: Recently, decomposing a complex problem into simpler subtasks--a crucial part of human-like natural planning--has been shown to significantly boost the performance of large language models (LLMs). However, leveraging such planning structures during post-training to improve the performance of smaller open-source LLMs remains underexplored. Motivated by this, we introduce PLAN-TUNING, a unified post-training framework that (i) distills synthetic task decompositions (termed “planning trajectories”) from large-scale LLMs and (ii) fine-tunes smaller models with supervised and reinforcement-learning objectives designed to mimic these planning processes, thereby improving complex reasoning. On the GSM8k and MATH benchmarks, plan-tuned models outperform strong baselines by an average of $\sim7$%. Furthermore, plan-tuned models generalize better to out-of-domain datasets, with average performance improvements of $\sim10$% and $\sim12$% on OlympiadBench and AIME 2024, respectively. Our detailed analysis demonstrates how planning trajectories improve complex reasoning capabilities, showing that PLAN-TUNING is an effective strategy for improving the task-specific performance of smaller LLMs.
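To make the two post-training objectives mentioned in the abstract concrete, a minimal sketch in generic notation is given below; the symbols ($\mathcal{D}$, $\pi_\theta$, $r$) and the exact form of the losses are illustrative assumptions, not the paper's precise formulation. The supervised stage trains the smaller model to reproduce a distilled planning trajectory $p$ followed by the solution $y$ for a problem $x$, and the reinforcement-learning stage optimizes the expected reward of plan-and-solve generations:
$$\mathcal{L}_{\text{SFT}}(\theta) = -\,\mathbb{E}_{(x,\,p,\,y)\sim\mathcal{D}}\big[\log \pi_\theta(p, y \mid x)\big], \qquad J_{\text{RL}}(\theta) = \mathbb{E}_{(p,\,y)\sim \pi_\theta(\cdot\mid x)}\big[r(x, p, y)\big],$$
where $\mathcal{D}$ is the dataset of planning trajectories distilled from a large LLM and $r$ is a reward reflecting, e.g., final-answer correctness.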
Paper Type: Long
Research Area: Language Modeling
Research Area Keywords: Natural Planning, Mathematical Reasoning, Complex Problem Solving, LLM Training, Post Training, Reinforcement Learning (RL)
Contribution Types: NLP engineering experiment, Data analysis
Languages Studied: English
Keywords: Natural Planning, Mathematical Reasoning, Complex Problem Solving, LLM Training, Post Training, Reinforcement Learning (RL)
Submission Number: 5467