Keywords: Combinatorial Optimization, Self-Supervised Learning, Graph Neural Networks
TL;DR: We demonstrate the benefits of using label-preserving augmentations for self-supervised SAT learning, at both training and inference time
Abstract: Data augmentations have previously been leveraged for neural SAT solvers to reduce the number of labeled instances required to successfully train a model. In this work, we show how data augmentations can be used to enhance neural SAT solvers without access to any labeled instances. We conduct a theoretical analysis of their impact on the loss function in the self-supervised setting. Through extensive benchmarking, we establish the empirical benefits of these augmentations for both training and inference, and compare them against several other augmentation techniques commonly found in the literature.
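To make the notion of a label-preserving augmentation concrete, below is a minimal sketch of three standard satisfiability-preserving transformations of a CNF formula (variable renaming, clause/literal shuffling, and polarity flipping). This is illustrative only and assumes a DIMACS-style encoding of clauses as lists of integer literals; the specific augmentations studied in the paper may differ.

```python
import random

# A CNF formula in DIMACS-style encoding: a list of clauses, each a list of
# non-zero integer literals (positive = variable, negative = its negation).
Formula = list[list[int]]

def permute_variables(formula: Formula, rng: random.Random) -> Formula:
    """Rename variables by a random permutation; satisfiability is unchanged."""
    variables = sorted({abs(lit) for clause in formula for lit in clause})
    shuffled = variables[:]
    rng.shuffle(shuffled)
    mapping = dict(zip(variables, shuffled))
    return [[mapping[abs(lit)] * (1 if lit > 0 else -1) for lit in clause]
            for clause in formula]

def shuffle_clauses(formula: Formula, rng: random.Random) -> Formula:
    """Reorder clauses and literals within clauses; a CNF formula is a set
    of clauses, so the SAT/UNSAT label is preserved."""
    augmented = [rng.sample(clause, len(clause)) for clause in formula]
    rng.shuffle(augmented)
    return augmented

def flip_polarity(formula: Formula, rng: random.Random) -> Formula:
    """Negate every occurrence of one randomly chosen variable; any satisfying
    assignment of the original maps to one for the new formula by flipping
    that variable, so the label is preserved."""
    variables = list({abs(lit) for clause in formula for lit in clause})
    target = rng.choice(variables)
    return [[-lit if abs(lit) == target else lit for lit in clause]
            for clause in formula]

# Usage: (x1 v ~x2) & (x2 v x3) stays satisfiable under every augmentation.
rng = random.Random(0)
formula = [[1, -2], [2, 3]]
print(permute_variables(formula, rng))
print(shuffle_clauses(formula, rng))
print(flip_polarity(formula, rng))
```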
Submission Number: 48