OptimAI: Optimization from Natural Language Using LLM-Powered AI Agents

ICLR 2026 Conference Submission 15716 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Optimization, Large Language Models, AI Agents
TL;DR: We present OptimAI, a multi-agent LLM framework that translates natural language optimization problems into executable code, achieving state-of-the-art results and demonstrating synergistic gains from heterogeneous model collaboration.
Abstract: Optimization plays a vital role in scientific research and practical applications. However, formulating an optimization problem described in natural language into a mathematical form and selecting a suitable solver require substantial domain expertise. We introduce OptimAI, a framework for solving Optimization problems described in natural language by leveraging LLM-powered AI agents, and achieve superior performance over current state-of-the-art methods. Our framework is built upon the following key roles: (1) a formulator that translates natural language problem descriptions into precise mathematical formulations; (2) a planner that constructs a high-level solution strategy prior to execution; and (3) a coder and a code critic capable of interacting with the environment and reflecting on outcomes to refine future actions. Ablation studies confirm that all roles are essential; removing the planner or code critic results in $5.8\times$ and $3.1\times$ drops in productivity, respectively. Furthermore, we introduce UCB-based debug scheduling to dynamically switch between alternative plans, yielding an additional $3.3\times$ productivity gain. Our design emphasizes multi-agent collaboration, and our experiments confirm that combining diverse models leads to performance gains. Our approach attains 88.1\% accuracy on the NLP4LP dataset and 82.3\% on the Optibench dataset, reducing error rates by 58\% and 52\%, respectively, over prior best results.
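To make the UCB-based debug scheduling mentioned in the abstract concrete, here is a minimal sketch of the standard UCB1 selection rule applied to alternative plans; the per-plan reward signal (e.g., tests passed per debug attempt) and bookkeeping fields are assumptions for illustration, not the authors' implementation.

```python
import math

def ucb_select(plans, c=1.4):
    """Pick the next plan to debug using the UCB1 rule.

    `plans` is a list of dicts, each tracking:
      - "pulls":  number of debug attempts made on that plan so far
      - "reward": cumulative success signal (assumed here, e.g. tests passed)
    The exploration constant `c` balances exploiting the best-scoring plan
    against exploring plans tried less often.
    """
    total = sum(p["pulls"] for p in plans)
    best, best_score = None, float("-inf")
    for p in plans:
        if p["pulls"] == 0:
            return p  # try every plan at least once before scoring
        score = p["reward"] / p["pulls"] + c * math.sqrt(math.log(total) / p["pulls"])
        if score > best_score:
            best, best_score = p, score
    return best

# Usage sketch: after each debug attempt on the selected plan,
# increment its "pulls" and add the observed reward, then reselect.
```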
Primary Area: foundation or frontier models, including LLMs
Submission Number: 15716