Evaluating LLMs for Combinatorial Optimization: One-Phase and Two-Phase Heuristics for 2D Bin-Packing
Keywords: Large Language Models, Combinatorial Optimization, 2D Bin-Packing, Evolutionary Algorithms, Heuristic Solutions
TL;DR: We propose a framework for evaluating LLMs on 2D bin-packing, pairing them with evolutionary algorithms to refine generated heuristics. GPT-4o outperforms traditional heuristics, reaching optimal solutions in fewer iterations, cutting average bin usage from 16 to 15, and raising space utilization from 0.76–0.78 to 0.83.
Abstract: This paper presents an evaluation framework for assessing the capabilities of Large Language Models (LLMs) in combinatorial optimization, specifically the 2D bin-packing problem. We introduce a systematic methodology that combines LLMs with evolutionary algorithms to iteratively generate and refine heuristic solutions. Through comprehensive experiments comparing LLM-generated heuristics against traditional approaches (Finite First-Fit and Hybrid First-Fit), we demonstrate that LLMs can produce more efficient solutions while requiring fewer computational resources. Our evaluation shows that GPT-4o reaches optimal solutions within two iterations, reducing average bin usage from 16 to 15 bins and improving space utilization from 0.76–0.78 to 0.83. This work contributes to understanding LLM evaluation in specialized domains and establishes benchmarks for assessing LLM performance on combinatorial optimization tasks.
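For context on the traditional baselines named in the abstract, Finite First-Fit and Hybrid First-Fit are both level-oriented packers. The sketch below is a minimal level-oriented first-fit heuristic of that family, shown only to illustrate the kind of baseline being compared against; the item representation, bin dimensions, and function name `finite_first_fit` are illustrative assumptions, not the paper's exact implementation or data.

```python
from typing import List, Tuple

def finite_first_fit(items: List[Tuple[float, float]], bin_w: float, bin_h: float):
    """Illustrative level-oriented first-fit packing into finite bins.

    Items are (width, height) pairs; each bin holds a stack of levels, and each
    level tracks its height and the horizontal space already used.
    """
    # Sort by decreasing height so each level is opened by its tallest item.
    items = sorted(items, key=lambda wh: wh[1], reverse=True)
    bins = []  # each bin: {"levels": [{"h": level_height, "used_w": x}, ...], "used_h": total}
    for w, h in items:
        placed = False
        for b in bins:
            # First fit: try the first existing level with enough horizontal room.
            for level in b["levels"]:
                if level["used_w"] + w <= bin_w and h <= level["h"]:
                    level["used_w"] += w
                    placed = True
                    break
            if placed:
                break
            # Otherwise open a new level in this bin if vertical space remains.
            if b["used_h"] + h <= bin_h:
                b["levels"].append({"h": h, "used_w": w})
                b["used_h"] += h
                placed = True
                break
        if not placed:
            # No existing bin fits: open a new bin with one level for the item.
            bins.append({"levels": [{"h": h, "used_w": w}], "used_h": h})
    return bins

# Example: pack a few rectangles into 10x10 bins and report utilization.
rects = [(4, 7), (6, 5), (3, 3), (5, 4), (2, 6), (7, 2)]
packed = finite_first_fit(rects, bin_w=10.0, bin_h=10.0)
area = sum(w * h for w, h in rects)
print(len(packed), "bins, utilization =", round(area / (len(packed) * 100), 2))
```

In the evaluation loop described in the abstract, the LLM would propose replacements or refinements for a placement rule like the one above, and the evolutionary search would keep candidates that reduce bin count or raise utilization.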
Submission Number: 248