Self-Supervised Transformers as Iterative Solution Improvers for Constraint Satisfaction

Published: 01 May 2025 · Last Modified: 18 Jun 2025 · ICML 2025 poster · CC BY 4.0
TL;DR: A self-supervised Transformer-based framework for iteratively solving Constraint Satisfaction Problems.
Abstract: We present a Transformer-based framework for Constraint Satisfaction Problems (CSPs). CSPs arise in many applications, and accelerating their solution with machine learning is thus of wide interest. Most existing approaches rely on supervised learning from feasible solutions or on reinforcement learning, paradigms that require either feasible solutions to these NP-complete CSPs or large training budgets and a complex expert-designed reward signal. To address these challenges, we propose ConsFormer, a self-supervised framework that leverages a Transformer as a solution refiner. ConsFormer constructs a solution to a CSP iteratively in a process that mimics local search. Instead of using feasible solutions as labeled data, we devise differentiable approximations to the discrete constraints of a CSP to guide model training. Our model is trained to improve random assignments for a single step but is deployed iteratively at test time, circumventing the bottlenecks of supervised and reinforcement learning. Experiments on Sudoku, Graph Coloring, Nurse Rostering, and MAXCUT demonstrate that our method can tackle out-of-distribution CSPs simply through additional iterations.
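To make the recipe concrete, here is a minimal, hypothetical PyTorch sketch of the idea described above; it is not the authors' implementation (see the linked repository for that). It relaxes a binary not-equal constraint into a differentiable penalty on value distributions, trains a toy Transformer to improve random assignments for a single step, and then applies that one-step refiner iteratively at test time. The Refiner class, the 5-cycle graph-coloring instance, and all hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Refiner(nn.Module):
    """Toy Transformer that reads a candidate assignment and proposes a
    refined one. For simplicity it is specialized to one fixed instance;
    the paper instead conditions on the instance's constraints."""
    def __init__(self, n_vars, n_vals, d=64):
        super().__init__()
        self.embed = nn.Embedding(n_vals, d)
        self.pos = nn.Parameter(torch.randn(n_vars, d) * 0.02)
        layer = nn.TransformerEncoderLayer(d, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d, n_vals)

    def forward(self, assignment):              # (batch, n_vars) ints
        h = self.embed(assignment) + self.pos   # (batch, n_vars, d)
        return self.head(self.encoder(h))       # (batch, n_vars, n_vals)

def constraint_loss(logits, edges):
    """Differentiable relaxation of binary x_i != x_j constraints
    (e.g. the edges of a graph-coloring instance). The dot product of
    two value distributions is the probability that the variables
    collide, so it vanishes exactly when the soft assignment is feasible."""
    p = F.softmax(logits, dim=-1)
    i, j = edges
    return (p[:, i] * p[:, j]).sum(-1).mean()

# Toy instance: color a 5-cycle with 3 colors.
n_vars, n_vals = 5, 3
edges = (torch.tensor([0, 1, 2, 3, 4]), torch.tensor([1, 2, 3, 4, 0]))

model = Refiner(n_vars, n_vals)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Training: learn to improve *random* assignments in a single step.
# No feasible solutions and no reward signal are needed.
model.train()
for _ in range(200):
    x = torch.randint(n_vals, (32, n_vars))
    loss = constraint_loss(model(x), edges)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Deployment: apply the one-step refiner iteratively, like local search.
model.eval()
with torch.no_grad():
    x = torch.randint(n_vals, (1, n_vars))
    for _ in range(50):
        x = model(x).argmax(-1)                 # discrete refinement move
        if (x[:, edges[0]] == x[:, edges[1]]).sum() == 0:
            break
print("remaining violations:", (x[:, edges[0]] == x[:, edges[1]]).sum().item())
```

Training on a single improvement step keeps the objective cheap (no backpropagation through a chain of refinements), while test-time iteration plays the role of local search: harder or out-of-distribution instances can simply be given more iterations.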
Lay Summary: Solving problems under specific rules and restrictions is part of many real-life tasks, from completing puzzles like Sudoku to scheduling employee shifts. These problems are often hard to solve, and even the best traditional methods can struggle as the problems grow larger and more complex. Artificial intelligence has been used to help tackle these problems more efficiently. However, many existing methods rely on having examples of good solutions or require extensive trial and error, which can be slow or impractical. We introduce ConsFormer, which takes a different approach: it trains an AI model to make small improvements to a solution in a single step, without needing correct answers during training. When deployed, ConsFormer is applied repeatedly, starting from a random guess and refining it step by step. ConsFormer works across different problems and can handle more challenging instances simply by running more improvement steps. This makes it a promising tool for solving complex real-world constraint reasoning problems efficiently.
Link To Code: https://github.com/khalil-research/ConsFormer
Primary Area: Optimization → Discrete and Combinatorial Optimization
Keywords: Constraint Satisfaction Problem, Transformer, Self-supervised Learning, Local Search
Submission Number: 13215