CodeI/O: Condensing Reasoning Patterns via Code Input-Output Prediction

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 (Oral) · CC BY 4.0
TL;DR: We teach the models to predict code inputs and outputs to improve their general reasoning ability.
Abstract: Reasoning is a fundamental capability of Large Language Models. While prior research predominantly focuses on enhancing narrow skills like math or code generation, improving performance on many other reasoning tasks remains challenging due to sparse and fragmented training data. To address this issue, we propose CodeI/O, a novel approach that systematically condenses the diverse reasoning patterns inherently embedded in contextually grounded code by transforming the original code into a code input-output prediction format. By training models to predict inputs/outputs given code and test cases entirely in natural language as Chain-of-Thought (CoT) rationales, we expose them to universal reasoning primitives—like logic flow planning, state-space searching, decision tree traversal, and modular decomposition—while decoupling structured reasoning from code-specific syntax and preserving procedural rigor. Experimental results demonstrate that CodeI/O leads to consistent improvements across symbolic, scientific, logic, math & numerical, and commonsense reasoning tasks. By matching the existing ground-truth outputs or re-executing the code with predicted inputs, we can verify each prediction and further enhance the CoTs through multi-turn revision, resulting in CodeI/O++ and achieving higher performance. Our data and models will be publicly available.
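The abstract describes two verification routes for a prediction: matching a predicted output against the known ground truth, and re-executing the code on a predicted input to check it yields the given output. A minimal Python sketch of that verification loop is below; the task function, its inputs, and the helper names are illustrative assumptions, not taken from the paper's pipeline.

```python
# Illustrative sketch of CodeI/O-style verification (names are hypothetical):
# - output prediction: compare the model's predicted output to the ground truth
# - input prediction: re-execute the function on the predicted input and check
#   whether it reproduces the given output


def sum_of_squares(nums):
    """Example task function whose input/output behavior a model reasons about."""
    return sum(n * n for n in nums)


def verify_output_prediction(func, given_input, predicted_output):
    """Check an output prediction against the ground-truth execution result."""
    return func(given_input) == predicted_output


def verify_input_prediction(func, predicted_input, given_output):
    """Check an input prediction by re-executing the code on it."""
    try:
        return func(predicted_input) == given_output
    except Exception:
        # A malformed predicted input simply fails verification.
        return False


# Output prediction: for input [1, 2, 3], a model predicting 14 is verified.
print(verify_output_prediction(sum_of_squares, [1, 2, 3], 14))   # True

# Input prediction: for output 25, a model predicting [3, 4] is verified.
print(verify_input_prediction(sum_of_squares, [3, 4], 25))       # True
```

Predictions that fail either check can then be routed into the multi-turn revision step that produces CodeI/O++.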
Lay Summary: Improving how AI models reason generally—rather than just in specific areas like math or coding—is crucial for creating truly intelligent systems. Until now, most efforts to enhance reasoning have been narrowly focused. Our solution is remarkably simple: we teach models to predict inputs and outputs for a wide variety of existing code functions, but entirely in the form of natural language. This approach effectively captures the fundamental reasoning patterns buried in code. The result is a significant boost in performance across diverse reasoning tasks, bringing us closer to AI systems that can think robustly in all domains.
Link To Code: https://github.com/hkust-nlp/CodeIO
Primary Area: Deep Learning->Large Language Models
Keywords: Large Language Models, Reasoning, Code Execution
Submission Number: 9627