Evaluation of Large Language Models as Solution Generators in Complex Optimization

Published: 2025, Last Modified: 08 Jan 2026, IEEE Comput. Intell. Mag. 2025, CC BY-SA 4.0
Abstract: Large language models (LLMs) have demonstrated strong performance not only in natural language processing but also in a wide range of non-linguistic domains, and there is a growing trend of applying them to diverse optimization scenarios. However, whether LLMs are genuinely beneficial for complex numerical optimization problems remains unexplored. This paper provides a comprehensive evaluation of LLMs in optimization, covering both discrete and continuous problems, to assess their effectiveness and unique contributions in this field. Our findings reveal both the limitations and the future possibilities of LLMs in optimization. Despite their considerable computational power, LLMs substantially underperform on numerical optimization tasks, largely due to a mismatch between the problem domain and their processing capabilities. Nevertheless, while LLMs may not be well suited to traditional numerical optimization, their broader potential remains promising: they can assist numerical optimizers by reducing reliance on domain-specific knowledge through prompt processing, and they can solve problems in non-numerical domains. To the best of our knowledge, this work presents the first systematic evaluation of LLMs for numerical optimization. Our findings pave the way for a deeper understanding of LLMs’ role in optimization and guide their future application across a wide range of scenarios.
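For readers unfamiliar with the paradigm the abstract refers to, the following is a minimal illustrative sketch, not the paper's actual protocol, of using an LLM as a solution generator in an optimization loop: past candidate solutions and their objective values are serialized into a prompt, and the model is asked to propose the next candidate. The objective, the `query_llm` placeholder, and all parameter names are assumptions for illustration; a real evaluation would call an actual model and parse its text completion.

```python
# Illustrative sketch of an "LLM as solution generator" loop (assumed setup,
# not the paper's method): the prompt carries (solution, score) history and
# the model proposes the next candidate.
import random
from typing import Callable, List, Tuple


def sphere(x: List[float]) -> float:
    """Toy continuous objective: minimize the sum of squares."""
    return sum(v * v for v in x)


def query_llm(prompt: str) -> List[float]:
    """Hypothetical stand-in for an LLM call returning a candidate vector.

    A real pipeline would send `prompt` to a model and parse the reply;
    here we return a random point so the sketch runs without an API key.
    """
    return [random.uniform(-5.0, 5.0) for _ in range(2)]


def llm_optimize(objective: Callable[[List[float]], float],
                 budget: int = 20) -> Tuple[List[float], float]:
    history: List[Tuple[List[float], float]] = []
    for _ in range(budget):
        # Expose past trials to the model as plain text.
        prompt = "Minimize f(x). Previous trials:\n" + "\n".join(
            f"x={x}, f={fx:.4f}" for x, fx in history
        ) + "\nPropose a better x."
        candidate = query_llm(prompt)
        history.append((candidate, objective(candidate)))
    # Return the best candidate seen over the whole budget.
    return min(history, key=lambda pair: pair[1])


if __name__ == "__main__":
    best_x, best_f = llm_optimize(sphere)
    print(f"best x = {best_x}, f = {best_f:.4f}")
```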