Large Language Models (LLMs) exhibit limitations in complex, multi-step reasoning tasks. This paper introduces a framework that enhances LLM problem-solving by incorporating explicit planning via a modified Monte Carlo Tree Search (HierarchicalMCTS). Our approach decouples planning from execution: it hierarchically searches the space of complete reasoning plans, guided by evaluation agents that assess logical consistency and feasibility. We also explore using smaller LLMs for planning and larger ones for execution to improve efficiency. Experiments on six reasoning benchmarks show that HierarchicalMCTS planning significantly improves accuracy, achieving a 24.18% average improvement over zero-shot Chain-of-Thought prompting. Notably, the smaller-planner/larger-executor configuration retains 90.70% of the full performance while reducing computational cost by 73%. These findings highlight the importance of explicit, search-based planning for LLMs and suggest a path toward more robust and efficient reasoning systems for complex problem-solving. Code is anonymously available at \url{https://anonymous.4open.science/r/HierarchicalMCTS-9C0D}.
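To make the plan/execute decoupling concrete, the following is a minimal sketch of an MCTS loop over reasoning plans, not the authors' implementation (see the linked repository for that). The functions `plan_step_proposals`, `score_plan`, and `execute_plan` are hypothetical stand-ins for the smaller planner LLM, the evaluation agents, and the larger executor LLM, respectively; here they are stubbed so the sketch runs standalone.

```python
import math
import random

# Hypothetical stand-ins for the three LLM roles described in the abstract.
def plan_step_proposals(plan, k=3):
    """Smaller planner LLM (stub): propose k candidate next plan steps."""
    return [f"step-{len(plan)}-{i}" for i in range(k)]

def score_plan(plan):
    """Evaluation agents (stub): rate logical consistency/feasibility in [0, 1]."""
    return random.random()

def execute_plan(plan, problem):
    """Larger executor LLM (stub): carry out the selected plan on the problem."""
    return f"answer to {problem!r} via a {len(plan)}-step plan"

class Node:
    def __init__(self, plan, parent=None):
        self.plan = plan          # partial reasoning plan (list of steps)
        self.parent = parent
        self.children = []
        self.visits = 0
        self.value = 0.0          # accumulated evaluation scores

    def uct(self, c=1.4):
        # Standard UCT: exploit mean value, explore rarely-visited children.
        if self.visits == 0:
            return float("inf")
        return (self.value / self.visits
                + c * math.sqrt(math.log(self.parent.visits) / self.visits))

def mcts_plan(problem, iterations=50, max_depth=4):
    root = Node(plan=[])
    for _ in range(iterations):
        # 1. Selection: descend by UCT to a leaf.
        node = root
        while node.children:
            node = max(node.children, key=Node.uct)
        # 2. Expansion: ask the planner for candidate next steps.
        if len(node.plan) < max_depth:
            node.children = [Node(node.plan + [s], parent=node)
                             for s in plan_step_proposals(node.plan)]
            node = random.choice(node.children)
        # 3. Evaluation: score the (partial) plan, no execution yet.
        reward = score_plan(node.plan)
        # 4. Backpropagation: propagate the score to the root.
        while node:
            node.visits += 1
            node.value += reward
            node = node.parent
    # Return the most-visited complete plan.
    best = root
    while best.children:
        best = max(best.children, key=lambda n: n.visits)
    return best.plan

if __name__ == "__main__":
    problem = "a multi-step reasoning task"
    plan = mcts_plan(problem)        # planning: cheap, search-guided
    print(execute_plan(plan, problem))  # execution: single pass by the large model
```

The key design point this sketch illustrates is that the expensive executor is invoked only once, on the plan the search selects, which is what makes the smaller-planner/larger-executor configuration cheaper than running the large model inside the search loop.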