Keywords: LLM evaluation, LLM Agent, LLM Self-Optimization
Abstract: Large Language Models (LLMs) have demonstrated remarkable capabilities in reasoning and tool use. Yet the fundamental cognitive faculties essential for problem-solving (perception, reasoning, and memory) remain the stable core of intelligence: rather than memorizing specific patterns, humans succeed in novel environments by applying these intrinsic faculties to adapt and optimize. Whether LLMs possess this essential capacity, namely the ability to continuously refine solutions in response to dynamic environmental feedback, remains underexplored.
To address this gap, we introduce \textbf{OPT-BENCH}, a benchmark for evaluating the self-improvement capabilities of LLMs in large-scale search spaces. Combining 20 machine learning tasks with 10 classic NP-hard problems, OPT-BENCH provides a rigorous setting for assessing whether agents can adapt through intrinsic self-reflection rather than rote tool application.
We further propose \textbf{OPT-Agent}, a framework that emulates human-like cognitive adaptation. It operates via a general perception--memory--reasoning loop, iteratively refining solutions based on environmental feedback.
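A minimal sketch of the kind of perception--memory--reasoning loop described above follows; the names here (Memory, llm_propose, evaluate, optimize) are hypothetical illustrations under our reading of the abstract, not the paper's actual implementation.
\begin{verbatim}
# Sketch of a perception--memory--reasoning refinement loop, in the
# spirit of the OPT-Agent description. All names are hypothetical
# placeholders, not the paper's actual code.
from dataclasses import dataclass, field


@dataclass
class Memory:
    """Stores past candidate solutions and the feedback they received."""
    history: list = field(default_factory=list)

    def add(self, solution: str, feedback: str, score: float) -> None:
        self.history.append(
            {"solution": solution, "feedback": feedback, "score": score}
        )

    def best(self) -> dict | None:
        return max(self.history, key=lambda h: h["score"], default=None)


def llm_propose(task: str, memory: Memory) -> str:
    """Placeholder for an LLM call that reasons over the task and the
    accumulated memory to draft a new candidate solution."""
    raise NotImplementedError


def evaluate(solution: str) -> tuple[str, float]:
    """Placeholder for the environment: executes the candidate and
    returns textual feedback plus a scalar quality score."""
    raise NotImplementedError


def optimize(task: str, num_iterations: int = 10) -> dict | None:
    memory = Memory()
    for _ in range(num_iterations):
        candidate = llm_propose(task, memory)   # reasoning
        feedback, score = evaluate(candidate)   # perception of feedback
        memory.add(candidate, feedback, score)  # memory update
    return memory.best()
\end{verbatim}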
Through extensive experiments on 19 LLMs from 7 model families, spanning reasoning models, general-purpose models, and open-source models ranging from 3B to 235B parameters, we demonstrate that stronger models are more effective at leveraging feedback signals for self-improvement. However, this adaptability remains fundamentally bounded by a model's base capacity, and even the most advanced LLMs still fall short of human expert performance.
Paper Type: Long
Research Area: Question Answering
Research Area Keywords: AI / LLM Agents
Contribution Types: Model analysis & interpretability, Data resources
Languages Studied: English
Submission Number: 8312