Satori: Reinforcement Learning with Chain-of-Action-Thought Enhances LLM Reasoning via Autoregressive Search

Published: 01 May 2025, Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
Abstract: Large language models (LLMs) have demonstrated remarkable reasoning capabilities across diverse domains. Recent studies have shown that increasing test-time computation enhances LLMs' reasoning capabilities. This typically involves extensive sampling at inference time guided by an external LLM verifier, resulting in a two-player system. Despite the external guidance, the effectiveness of this system points to the potential of a single LLM to tackle complex tasks on its own. Thus, we pose a new research problem: *Can we internalize the search capability to fundamentally enhance the reasoning abilities of a single LLM?* This work explores an orthogonal direction: post-training LLMs for autoregressive search (*i.e.,* an extended reasoning process with self-reflection and self-exploration of new strategies). To achieve this, we propose Chain-of-Action-Thought (COAT) reasoning and a two-stage training paradigm: 1) a small-scale format-tuning stage to internalize the COAT reasoning format, and 2) a large-scale self-improvement stage leveraging reinforcement learning. Our approach results in Satori, a 7B LLM trained on open-source models and data. Extensive empirical evaluations demonstrate that Satori achieves state-of-the-art performance on mathematical reasoning benchmarks while exhibiting strong generalization to out-of-domain tasks. Code, data, and models are fully open-sourced.
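To make the COAT reasoning format concrete, below is a minimal, self-contained Python sketch of how a trajectory segmented by meta-action tokens could be parsed into (action, thought) steps. The token strings (`<|continue|>`, `<|reflect|>`, `<|explore|>`) and the parser are illustrative assumptions for this sketch, not the released implementation.

```python
import re
from typing import List, Tuple

# Assumed meta-action tokens: special tokens that switch the model between
# continuing its reasoning, reflecting on it, and exploring an alternative.
# Exact token strings are an assumption for this sketch.
META_ACTIONS = ("<|continue|>", "<|reflect|>", "<|explore|>")

def parse_coat(trajectory: str) -> List[Tuple[str, str]]:
    """Split a COAT-formatted trajectory into (meta-action, thought) pairs."""
    pattern = "(" + "|".join(re.escape(t) for t in META_ACTIONS) + ")"
    parts = re.split(pattern, trajectory)
    # parts alternates: [prefix, token, thought, token, thought, ...]
    return [(parts[i], parts[i + 1].strip()) for i in range(1, len(parts) - 1, 2)]

# Toy trajectory: solve, catch an error on reflection, then explore a fix.
demo = (
    "<|continue|>Compute 12 * 15 by splitting: 12 * 10 + 12 * 5 = 120 + 60 = 190."
    "<|reflect|>Check: 120 + 60 is 180, not 190, so the last step is wrong."
    "<|explore|>Redo the sum: 120 + 60 = 180, so 12 * 15 = 180."
)

for action, thought in parse_coat(demo):
    print(f"{action:>13}  {thought}")
```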
Lay Summary: Since the release of OpenAI's o1, the research community has made significant efforts to endow open-source LLMs with advanced reasoning capabilities, using approaches such as distillation from a strong teacher model, Monte Carlo tree search (MCTS), and reward-model-guided search. This work explores a new research direction: enabling LLMs with autoregressive search capabilities, i.e., a single LLM performing an extended reasoning process with self-reflection and self-exploration of new strategies. To achieve this, we develop an LLM post-training paradigm whose key concepts and ideas are inspired by the classical reinforcement learning (RL) community. Our approach results in Satori, a 7B LLM trained on open-source models and data. Key features of Satori include: 1) self-reflection and self-exploration without external guidance; 2) state-of-the-art reasoning performance achieved mainly through self-improvement (RL); and 3) reasoning capabilities that transfer to unseen domains beyond math.
Link To Code: https://github.com/satori-reasoning/Satori
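For reference, a minimal inference sketch using the Hugging Face transformers API. The model identifier `Satori-reasoning/Satori-7B` is a hypothetical placeholder; see the repository above for the actual released checkpoints.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical checkpoint name; consult the GitHub repo for released models.
model_id = "Satori-reasoning/Satori-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Solve step by step: what is the sum of the first 50 positive integers?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)

# Print only the newly generated continuation, not the echoed prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```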
Primary Area: Deep Learning->Large Language Models
Keywords: reasoning, self-improvement, large language model, post-training, reinforcement learning
Submission Number: 12174