Toward Learning Generalized Cross-Problem Solving Strategies for Combinatorial Optimization

ICLR 2025 Conference Submission 5774 Authors

26 Sept 2024 (modified: 13 Oct 2024) · ICLR 2025 Conference Submission · CC BY 4.0
Keywords: Neural Combinatorial Optimization, Multi-task Learning
Abstract: Combinatorial optimization (CO) problems are fundamental across various domains, with many sharing similarities in optimization objectives, decision variables, and constraints. Many traditional algorithms perform well on related problems using similar solution strategies, highlighting the commonality in solving different problems. However, most machine learning approaches treat each CO problem in isolation, failing to capitalize on the underlying relationships between problems. In this paper, we investigate the potential to learn generalized solving strategies that capture the shared structure among different CO problems, enabling easier adaptation to related tasks. To this end, we propose to first divide the model architecture into three components: a header, an encoder, and a decoder; where The header and decoder address problem-specific inputs and outputs, while the encoder is designed to learn shared strategies that generalize across different problems. To ensure this, we enforce alignment in the optimization directions of the encoder across problems, maintaining consistency in both gradient directions and magnitudes to harmonize optimization processes. This is achieved by introducing the additional problem-specific rotation matrices and loss weights to steer the gradients, which are updated via a gradient consistency loss. Extensive experiments on six CO problems demonstrate that our method enhances the model's ability to capture shared solving strategies across problems. We show that the learned encoder on several problems can directly perform comparably on new problems to models trained from scratch, highlighting its potential to support developing the foundational model for combinatorial optimization. Source code will be made publicly available.
Primary Area: optimization
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 5774