Scaling Laws for Neural Combinatorial Optimization with LLaMA Models

Published: 04 Oct 2025 · Last Modified: 21 Nov 2025 · DiffCoAlg 2025 Poster · CC BY 4.0
Keywords: combinatorial optimization, scaling laws, large language models, neural algorithmic reasoning, NP-hard problems, LLaMA
TL;DR: We establish the first scaling laws for LLMs on combinatorial optimization, finding a universal three-phase pattern across TSP, Knapsack, and SAT with a 17B sweet spot.
Abstract: We present the first comprehensive scaling study of large language models on combinatorial optimization problems, establishing universal scaling laws across the Traveling Salesman Problem (TSP), 0/1 Knapsack, and Boolean Satisfiability (SAT). Through 4,829 experiments across three model sizes (8B, 17B, and 70B parameters), we discover a three-phase scaling pattern: Emergence (8B), Stability (17B), and Improvement (70B). Our results show that 17B models achieve the best balance, reaching 78.9\% solution quality and 100\% feasibility while outperforming classical heuristics by 134.6\% (p < 0.001). The universal scaling patterns across different NP-hard problems suggest LLMs develop general optimization principles rather than problem-specific strategies, opening new research directions in neural algorithm theory.
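To make the scaling-law framing concrete, here is a minimal sketch (not the paper's method or data) of how a power-law quality-versus-parameters curve could be fit to per-size results. The data points are hypothetical placeholders; only the 78.9\% figure at 17B appears in the abstract, and the functional form quality(N) ≈ a·N^b is an assumption.

```python
# Minimal sketch: fit a simple power law quality(N) ~ a * N^b in log-log space.
# Data points are hypothetical placeholders, not results from the paper.
import numpy as np

sizes = np.array([8.0, 17.0, 70.0])       # model sizes in billions of parameters
quality = np.array([62.0, 78.9, 81.5])    # mean solution quality in %; only 78.9 is from the abstract

# Least-squares line in log-log space: log(quality) = b * log(N) + log(a)
b, log_a = np.polyfit(np.log(sizes), np.log(quality), deg=1)
a = np.exp(log_a)
print(f"fitted power law: quality ~ {a:.1f} * N^{b:.2f}")

# Extrapolate to an intermediate, hypothetical 34B model
n = 34.0
print(f"predicted quality at {n:.0f}B params: {a * n**b:.1f}%")
```

A saturating form (e.g. q_max − a·N^(−b)) may be more appropriate if quality plateaus at larger scales, as the three-phase pattern suggests; the log-log linear fit above is just the simplest illustration.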
Submission Number: 32