R1-Code-Interpreter: LLMs Reason with Code via Supervised and Multi-stage Reinforcement Learning

ICLR 2026 Conference Submission 14477 Authors

18 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Large Language Models, Code Interpreter, Reinforcement Learning, Curriculum Learning, Symbolic Reasoning, Tool Use
TL;DR: We present R1-Code-Interpreter, an LLM trained with supervised and multi-stage RL to integrate code execution with reasoning, achieving strong performance across 144 diverse tasks and outperforming GPT-4o with Code Interpreter.
Abstract: Practical guidance on training Large Language Models (LLMs) to leverage a Code Interpreter across diverse tasks remains lacking. We present R1-Code-Interpreter, an extension of a text-only LLM trained via multi-turn supervised fine-tuning (SFT) and reinforcement learning (RL) to autonomously generate multiple code queries during step-by-step reasoning. Unlike prior RL + tool-use efforts focused on narrow domains such as math or retrieval, we curate 144 diverse reasoning and planning tasks and show that training a general-purpose Code Interpreter across them presents significant challenges due to task heterogeneity and scarcity of effective samples. To address this, we introduce a multi-stage curriculum learning approach that partitions training samples by measured improvement potential. The RL training prioritizes samples with higher potential and gradually shifts to lower-potential ones, increasing the average RL gains from merely +3.4% to +9.3% across Qwen-2.5 models (3/7/14B). Our final model, R1-CI-14B, improves average accuracy on the 37 test tasks from 44.1% to 72.4%, outperforming text-only GPT-4o (58.6%) and GPT-4o with Code Interpreter (70.9%). Notably, R1-CI-14B also exhibits emergent self-checking behavior through code generation.
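To make the curriculum idea concrete, below is a minimal sketch of the multi-stage scheme as described in the abstract: score each training sample's improvement potential, partition the pool into stages from high to low potential, and run RL on each stage in turn. The potential proxy, `rollout_success_fn`, and `run_rl_stage` are hypothetical stand-ins; the paper's actual metric and RL algorithm are not specified in this abstract.

```python
def improvement_potential(sample, rollout_success_fn, n_rollouts=8):
    # Hypothetical proxy: success variance across current-policy rollouts.
    # Samples solved sometimes but not always (p near 0.5) offer the most
    # measured headroom for RL; always- or never-solved ones offer little.
    p = sum(rollout_success_fn(sample) for _ in range(n_rollouts)) / n_rollouts
    return p * (1.0 - p)


def build_stages(samples, rollout_success_fn, n_stages=3):
    # Rank samples by measured potential, then split into equal-sized
    # stages ordered from highest to lowest potential.
    ranked = sorted(
        samples,
        key=lambda s: improvement_potential(s, rollout_success_fn),
        reverse=True,
    )
    size = -(-len(ranked) // n_stages)  # ceiling division
    return [ranked[i:i + size] for i in range(0, len(ranked), size)]


def train_with_curriculum(policy, samples, rollout_success_fn, run_rl_stage):
    # run_rl_stage(policy, stage) is a placeholder for one RL phase
    # (e.g., a policy-gradient loop with code-execution rewards).
    for stage in build_stages(samples, rollout_success_fn):
        policy = run_rl_stage(policy, stage)
    return policy
```

This mirrors the abstract's description of prioritizing higher-potential samples first and gradually shifting to lower-potential ones, so early RL updates are spent where measured gains are largest.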
Supplementary Material: zip
Primary Area: foundation or frontier models, including LLMs
Submission Number: 14477