Test-Time Adaptation for Unsupervised Combinatorial Optimization

18 Sept 2025 (modified: 22 Nov 2025) · ICLR 2026 Conference Withdrawn Submission · CC BY 4.0
Keywords: Combinatorial Optimization, Unsupervised Learning, Test-Time Adaptation, Graph Neural Networks
Abstract: Neural combinatorial optimization (NCO) has emerged as a data-driven alternative to classical solvers, with recent advances in unsupervised learning (UL) frameworks enabling training without ground-truth solutions. However, current UL-based NCO approaches tend to emphasize either generalization across diverse problem instances or instance-specific optimization, but rarely both. In this work, we introduce TACO, a model-agnostic test-time adaptation framework that unifies and extends these two paradigms through principled warm-starting: beginning from a trained, generalizable NCO model and applying instance-specific model updates at test time. Crucially, compared to naively fine-tuning a trained generalizable model or optimizing an instance-specific model from scratch, TACO achieves better solution quality while incurring negligible additional computational cost. Our method integrates seamlessly into existing UL-based NCO pipelines. Experiments on two canonical CO problems, Minimum Vertex Cover and Maximum Clique, demonstrate the effectiveness and robustness of TACO across static, distribution-shifted, and dynamic settings, establishing its broad applicability and practical impact.
Primary Area: optimization
Submission Number: 10451