Learning General Representations Across Graph Combinatorial Optimization Problems

ICLR 2025 Conference Submission 2509 Authors

22 Sept 2024 (modified: 19 Nov 2024) · ICLR 2025 Conference Submission · CC BY 4.0
Keywords: Combinatorial Optimization, Contrastive Learning, Representation Learning
TL;DR: We propose a paradigm to enhance representation learning for CO by capturing the underlying commonalities across multiple graph CO problems.
Abstract: Combinatorial optimization (CO) problems are classical and crucial in many fields; many NP-complete (NPC) problems are reducible to one another, revealing an underlying connection among them. Existing methods, however, primarily focus on task-specific models trained on individual datasets, which limits the quality of the learned representations and their transferability to other CO problems. Given this reducibility, a natural idea is to abstract a higher-level representation that captures the essence shared across different problems, enabling knowledge transfer and mutual enhancement. In this paper, we propose a novel paradigm, CORAL, that treats each CO problem type as a distinct modality and unifies them by transforming all instances into representations of the fundamental Boolean satisfiability (SAT) problem. Our approach aims to capture the underlying commonalities across multiple problem types via supervised cross-modal contrastive learning, thereby enhancing representation learning. Extensive experiments on seven graph decision problems (GDPs) demonstrate the effectiveness of CORAL, showing that our approach significantly improves the quality and generalizability of the learned representations. Furthermore, we showcase the utility of the pre-trained unified SAT representations on related tasks, including satisfying-assignment prediction and unsat-core variable prediction, highlighting the potential of CORAL as a unified pre-training paradigm for CO problems.
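The abstract outlines a two-step pipeline: cast each graph decision problem as a SAT instance (using the classical NPC reductions), then align the problem-specific and SAT encodings with a supervised cross-modal contrastive objective. The page provides no code, so the sketch below is purely illustrative: the 3-coloring-to-CNF reduction is the standard textbook one, while the InfoNCE-style loss, the function names, and all hyperparameters are hypothetical stand-ins, not the authors' actual implementation.

```python
import torch
import torch.nn.functional as F


def three_coloring_to_cnf(n_vertices, edges):
    """Textbook reduction GRAPH 3-COLORING -> SAT, illustrating the kind of
    transformation the abstract describes. Variable var(v, c) (a 1-indexed
    DIMACS id) means 'vertex v is assigned color c'."""
    var = lambda v, c: 3 * v + c + 1                        # c in {0, 1, 2}
    clauses = []
    for v in range(n_vertices):
        clauses.append([var(v, 0), var(v, 1), var(v, 2)])   # at least one color
        for c1 in range(3):
            for c2 in range(c1 + 1, 3):
                clauses.append([-var(v, c1), -var(v, c2)])  # at most one color
    for (u, v) in edges:
        for c in range(3):
            clauses.append([-var(u, c), -var(v, c)])        # endpoints differ
    return clauses


def cross_modal_info_nce(z_graph, z_sat, temperature=0.1):
    """Hypothetical InfoNCE-style loss aligning graph-modality embeddings
    with embeddings of their SAT translations. Row i of each (batch, dim)
    tensor comes from the same underlying CO instance (the positive pair);
    all other rows in the batch serve as negatives."""
    z_graph = F.normalize(z_graph, dim=-1)
    z_sat = F.normalize(z_sat, dim=-1)
    logits = z_graph @ z_sat.t() / temperature              # pairwise similarities
    targets = torch.arange(z_graph.size(0), device=z_graph.device)
    # Symmetric cross-entropy over both matching directions, CLIP-style.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```

Under this reading, any GDP encoder and any SAT/CNF encoder could be plugged in; the contrastive term pulls the two views of the same instance together in a shared space, which is one plausible way to realize the "unified SAT representation" the abstract claims.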
Primary Area: unsupervised, self-supervised, semi-supervised, and supervised representation learning
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 2509