ALIGNING LLMS WITH GRAPH NEURAL SOLVERS FOR COMBINATORIAL OPTIMIZATION

ICLR 2026 Conference Submission 16650 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Combinatorial Optimization, Large Language Models
TL;DR: We propose AlignOPT, a novel method that aligns LLMs with graph neural solvers to achieve more scalable and generalizable solutions for combinatorial optimization problems.
Abstract: Recent research has demonstrated the effectiveness of large language models (LLMs) in solving combinatorial optimization problems (COPs) by representing tasks and instances in natural language. However, purely language-based approaches struggle to accurately capture complex relational structures inherent in many COPs, rendering them less effective at addressing medium-sized or larger instances (e.g., problem sizes greater than 30). To address these limitations, we propose AlignOPT, a novel approach that aligns LLMs with graph neural solvers for learning a more generalizable neural COP heuristic. Specifically, AlignOPT leverages the semantic understanding capabilities of LLMs to encode textual descriptions of COPs and their instances while concurrently exploiting graph neural solvers to explicitly model the underlying graph structures of COP instances. Our approach facilitates a robust integration and alignment between linguistic semantics and structural representations, enabling more accurate and scalable COP solutions. Experimental results demonstrate that AlignOPT achieves state-of-the-art results across diverse COPs, underscoring its effectiveness in aligning semantic and structural representations. Additionally, AlignOPT exhibits strong generalization capabilities, successfully extending to previously unseen COP instances.
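The abstract describes aligning the LLM's semantic embeddings with the graph neural solver's structural embeddings. The paper does not specify the alignment objective here, but a common choice for this kind of cross-modal alignment is a symmetric InfoNCE-style contrastive loss, which pulls matching (text, graph) embedding pairs together and pushes mismatched pairs apart. The sketch below is a hypothetical illustration of that generic technique, not the authors' actual method; the function names and the temperature value are assumptions.

```python
import numpy as np

def log_softmax(x, axis):
    # Numerically stable log-softmax.
    x = x - x.max(axis=axis, keepdims=True)
    return x - np.log(np.exp(x).sum(axis=axis, keepdims=True))

def info_nce_loss(text_emb, graph_emb, temperature=0.1):
    """Symmetric contrastive alignment loss between two embedding sets.

    text_emb, graph_emb: (batch, dim) arrays where row i of each is the
    LLM encoding and the GNN encoding of the SAME COP instance.
    (Hypothetical sketch; not the paper's actual objective.)
    """
    # L2-normalize so similarities are cosine similarities.
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    g = graph_emb / np.linalg.norm(graph_emb, axis=1, keepdims=True)
    logits = t @ g.T / temperature  # pairwise similarity matrix

    # Matching pairs sit on the diagonal; apply cross-entropy in both
    # directions (text -> graph and graph -> text) and average.
    idx = np.arange(len(t))
    loss_tg = -log_softmax(logits, axis=1)[idx, idx].mean()
    loss_gt = -log_softmax(logits, axis=0)[idx, idx].mean()
    return (loss_tg + loss_gt) / 2

# Sanity check: perfectly aligned embeddings score a lower loss than
# embeddings paired with the wrong instances.
rng = np.random.default_rng(0)
emb = rng.normal(size=(8, 16))
aligned = info_nce_loss(emb, emb)
mismatched = info_nce_loss(emb, emb[::-1])
print(aligned < mismatched)  # True
```

In a setup like this, the two encoders are trained jointly so that the language view and the graph view of the same instance land near each other in a shared space, which is one plausible reading of the "integration and alignment between linguistic semantics and structural representations" the abstract describes.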
Primary Area: optimization
Submission Number: 16650