Using Clause Predictions for Learning-Augmented Constraint Satisfaction

ICLR 2026 Conference Submission22394 Authors

20 Sept 2025 (modified: 08 Oct 2025) · CC BY 4.0
Keywords: Learning-augmented algorithms, predictions, constraint satisfaction
TL;DR: We use clause predictions to solve fundamental constraint satisfaction problems beyond worst-case guarantees.
Abstract: We continue a recent flourishing line of work on studying NP-hard problems with predictions and focus on fundamental constraint satisfaction problems such as Max-E3SAT and its weighted variant. Max-E3SAT is the natural `maximizing' generalization of 3SAT, where we want to find an assignment that maximizes the number of satisfied clauses. We introduce a clause prediction model, where each clause provides one noisy bit (accurate with probability $1/2 + \varepsilon$) of information for each variable participating in the clause, based on an optimal assignment. We design an algorithm with an approximation factor of $7/8+\Theta(\varepsilon^2/\log(1/\varepsilon))$. Our algorithm leverages the fact that, in our model, high-occurrence variables tend to be highly predictable. By carefully incorporating a classic algorithm for bounded-occurrence Max-E3SAT, we are able to bypass the worst-case lower bound of $7/8$ without advice (assuming $P \ne NP$). We also give hardness results for Max-E3SAT in other well-studied prediction models, such as the $\varepsilon$-label and subset prediction models of Cohen-Addad et al. (NeurIPS 2024) and Ghoshal et al. (SODA 2025). In particular, under standard complexity assumptions, we show that in these prediction models Max-E3SAT is hard to approximate to within a factor of $7/8+\delta$, and Max-E3SAT with bounded occurrence $B$ (every variable appears in at most $B$ clauses) is hard to approximate to within a factor of $7/8+O(1/\sqrt{B})+\delta$, for $\delta$ a specific function of $\varepsilon$. Our first lower bound is based on the framework proposed by Ghoshal et al. (SODA 2025), and the second uses a randomized reduction from general instances of Max-E3SAT to bounded-occurrence instances proposed by Trevisan (STOC 2001).
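The abstract's key observation, that high-occurrence variables are highly predictable in the clause prediction model, can be illustrated with a small simulation. The sketch below is not the paper's algorithm; it only models the prediction channel: a variable appearing in $B$ clauses receives $B$ independent noisy bits, each agreeing with the optimal assignment with probability $1/2+\varepsilon$, and a majority vote over those bits recovers the optimal value with probability approaching 1 as $B$ grows (by a standard Chernoff bound). The function name `majority_vote_prediction` and all parameters are illustrative assumptions.

```python
import random

def majority_vote_prediction(true_value, occurrences, eps, rng):
    """Simulate the clause prediction model for one variable.

    Each of the variable's `occurrences` clauses emits one noisy bit
    that equals the optimal value `true_value` (0 or 1) independently
    with probability 1/2 + eps. We decode by majority vote; ties
    resolve to 0. This is a toy model of the channel, not the
    paper's algorithm.
    """
    votes = [true_value if rng.random() < 0.5 + eps else 1 - true_value
             for _ in range(occurrences)]
    return int(2 * sum(votes) > occurrences)

if __name__ == "__main__":
    rng = random.Random(0)
    eps = 0.1  # each bit is correct with probability 0.6
    # Decoding accuracy improves sharply with the occurrence count B,
    # matching the intuition that high-occurrence variables are
    # highly predictable.
    for B in (1, 10, 100):
        trials = 1000
        correct = sum(majority_vote_prediction(1, B, eps, rng)
                      for _ in range(trials))
        print(f"B={B:3d}  empirical decoding accuracy: {correct / trials:.3f}")
```

With per-bit accuracy $0.6$, a single occurrence is decoded correctly only about 60% of the time, while at $B=100$ the Chernoff bound already forces the majority vote to be correct with probability well above 90%; this gap is what lets the paper treat high-occurrence variables via predictions and handle the remaining bounded-occurrence part separately.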
Primary Area: learning theory
Submission Number: 22394