Keywords: Optimization, LLM, Dialectics, Evaluation
TL;DR: LLMs struggle with complex sequential optimization tasks—our ACE framework, inspired by Hegelian Dialectics, helps them perform better without retraining or fine-tuning.
Abstract: Large Language Models (LLMs) have demonstrated remarkable capabilities across diverse domains, opening new possibilities for solving complex optimization problems. This paper investigates the potential of LLMs as end-to-end designers for tackling Sequential Optimization Problems (SOPs), a challenging and pervasive class of tasks. To rigorously evaluate LLM performance, we introduce WorldGen, a dynamic benchmark for generating unseen SOPs with controllable complexity. Our initial findings show that while LLMs perform well on simpler SOPs, their effectiveness declines sharply as complexity increases. To address this, we draw inspiration from philosophical theories of reasoning—specifically, Hegelian Dialectics—and propose ACE, a dialectical framework that enhances LLM performance in SOPs without requiring retraining or fine-tuning.
Primary Area: optimization
Submission Number: 20422