Language Agent Tree Search Unifies Reasoning, Acting, and Planning in Language Models

Published: 11 Mar 2024, Last Modified: 15 Mar 2024
Venue: LLMAgents @ ICLR 2024 Poster
License: CC BY 4.0
Keywords: language models, reasoning, decision-making, search
TL;DR: We propose a search algorithm for LM agents based on MCTS
Abstract: While language models (LMs) have shown potential on a range of decision-making tasks, their reliance on simple acting processes limits their broad deployment as autonomous agents. In this paper, we introduce Language Agent Tree Search (LATS) -- \emph{the first general} framework that \emph{synergizes} the capabilities of LMs in reasoning, acting, and planning. By leveraging the in-context learning ability of LMs, we integrate Monte Carlo tree search into LATS to enable LMs to act as agents, along with LM-powered value functions and self-reflections for more effective exploration and thus enhanced decision-making. A key feature of our approach is the incorporation of an environment for external feedback, which offers a more deliberate and adaptive problem-solving mechanism that surpasses the constraints of existing techniques. Our experimental evaluation across diverse domains, including programming, interactive QA, web navigation, and math, validates the effectiveness and generality of LATS in decision-making while maintaining competitive or improved reasoning performance. Notably, LATS achieves state-of-the-art pass@1 accuracy (94.4%) for programming on HumanEval with GPT-4, and demonstrates gradient-free performance (average score of 75.9) comparable to gradient-based fine-tuning for web navigation on WebShop with GPT-3.5.
Submission Number: 65
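The abstract describes an MCTS-style loop driven by LM proposals, an LM-powered value function, and environment feedback. The sketch below illustrates that general idea under stated assumptions; it is not the authors' implementation. The callables `propose_actions`, `step_env`, and `score_state` are hypothetical placeholders for LM sampling, environment interaction, and LM-based evaluation, and the self-reflection step on failed trajectories is omitted for brevity.

```python
# Minimal sketch of an MCTS-style search for an LM agent (assumptions noted above):
# selection via UCT, expansion by sampling candidate actions from an LM,
# evaluation with an LM-scored value function, and backpropagation.
import math
from dataclasses import dataclass, field

@dataclass
class Node:
    state: str                       # trajectory / scratchpad text so far
    parent: "Node | None" = None
    children: list = field(default_factory=list)
    visits: int = 0
    value: float = 0.0               # accumulated value estimates

def uct(node: Node, c: float = 1.4) -> float:
    """Upper-confidence bound used to pick which child to descend into."""
    if node.visits == 0:
        return float("inf")
    return node.value / node.visits + c * math.sqrt(
        math.log(node.parent.visits) / node.visits
    )

def lats_search(root_state, propose_actions, step_env, score_state,
                n_iterations=30, n_expand=5):
    """MCTS-style search driven by LM proposals and an LM-scored value function.

    propose_actions(state, k) -> list of k candidate actions (LM sampling, assumed)
    step_env(state, action)   -> (next_state, reward, done) from the environment
    score_state(state)        -> heuristic value in [0, 1] (LM evaluation, assumed)
    """
    root = Node(root_state)
    for _ in range(n_iterations):
        # 1. Selection: walk down the tree by UCT until reaching a leaf.
        node = root
        while node.children:
            node = max(node.children, key=uct)
        # 2. Expansion: sample candidate actions and step the environment.
        for action in propose_actions(node.state, n_expand):
            next_state, reward, done = step_env(node.state, action)
            child = Node(next_state, parent=node)
            node.children.append(child)
            # 3. Evaluation: environment reward if terminal, else LM value estimate.
            value = reward if done else score_state(next_state)
            # 4. Backpropagation: update visit counts and values up to the root.
            n = child
            while n is not None:
                n.visits += 1
                n.value += value
                n = n.parent
    # Return the most-visited child of the root as the chosen next step.
    return max(root.children, key=lambda c: c.visits)
```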