Foundation Models for Boolean Logic

ICLR 2025 Conference Submission 13595 Authors

28 Sept 2024 (modified: 28 Nov 2024) · ICLR 2025 Conference Submission · CC BY 4.0
Keywords: Boolean logic, runtime prediction, graph neural networks, multi-task learning, foundation models
TL;DR: We train the first foundation model for Boolean logic: a graph neural network trained end to end to jointly predict twelve different tasks.
Abstract: Boolean logic is fundamental to many computational problems, such as Boolean satisfiability (SAT) and model counting, but existing machine learning (ML) approaches to automating algorithm design are computationally expensive and data-intensive. We propose the first foundation model for Boolean logic: a graph neural network (GNN) trained on a multi-task dataset of one million instances spanning sixteen tasks. Evaluating the foundation model's generalization on held-out tasks, we found that models fine-tuned from it were substantially more sample-efficient and converged much faster than models trained from scratch. We also identified several design components crucial to training these models, in particular the choice of normalization layer: a hybrid of different normalization techniques across layers is much more effective than any single normalization layer.
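To make the architecture described in the abstract concrete, below is a minimal, hypothetical sketch of the two central ideas: a message-passing GNN over a literal-clause bipartite graph (a common encoding of CNF formulas) with multiple task heads trained jointly, and a "hybrid" normalization scheme in which the normalization type varies across layers. All names, dimensions, and the specific alternating LayerNorm/BatchNorm pattern are illustrative assumptions, not the authors' actual design.

```python
# Illustrative sketch only; the paper's architecture details are not public here.
import torch
import torch.nn as nn


class MessagePassingLayer(nn.Module):
    """One round of clause <-> literal message passing with a per-layer norm."""

    def __init__(self, dim: int, norm: str):
        super().__init__()
        self.lit_update = nn.Linear(2 * dim, dim)
        self.cls_update = nn.Linear(2 * dim, dim)
        # Hybrid normalization: the norm *type* is chosen per layer
        # (assumption: mixing LayerNorm and BatchNorm, as one possible hybrid).
        if norm == "layer":
            self.norm = nn.LayerNorm(dim)
        elif norm == "batch":
            self.norm = nn.BatchNorm1d(dim)
        else:
            self.norm = nn.Identity()

    def forward(self, lits, clauses, adj):
        # adj: (num_clauses, num_literals) 0/1 incidence matrix of the formula.
        cls_msg = adj @ lits                      # aggregate literals -> clauses
        clauses = torch.relu(self.cls_update(torch.cat([clauses, cls_msg], -1)))
        lit_msg = adj.t() @ clauses               # aggregate clauses -> literals
        lits = torch.relu(self.lit_update(torch.cat([lits, lit_msg], -1)))
        return self.norm(lits), clauses


class MultiTaskBoolGNN(nn.Module):
    def __init__(self, dim: int = 64, rounds: int = 4, num_tasks: int = 12):
        super().__init__()
        # Alternate norm types across layers -- one hypothetical hybrid scheme.
        norms = ["layer" if t % 2 == 0 else "batch" for t in range(rounds)]
        self.layers = nn.ModuleList(MessagePassingLayer(dim, n) for n in norms)
        # One scalar head per task (e.g. runtime prediction, satisfiability,
        # model counting, ...), all trained jointly on the shared encoder.
        self.heads = nn.ModuleList(nn.Linear(dim, 1) for _ in range(num_tasks))

    def forward(self, lits, clauses, adj):
        for layer in self.layers:
            lits, clauses = layer(lits, clauses, adj)
        graph_repr = lits.mean(dim=0, keepdim=True)  # pool to a formula embedding
        return torch.cat([head(graph_repr) for head in self.heads], dim=-1)


# Toy usage: a random "formula" with 10 literals and 6 clauses.
lits = torch.randn(10, 64)
clauses = torch.randn(6, 64)
adj = (torch.rand(6, 10) < 0.3).float()
model = MultiTaskBoolGNN()
print(model(lits, clauses, adj).shape)  # -> torch.Size([1, 12])
```

Under this reading, fine-tuning on a held-out task would amount to keeping the shared encoder weights and attaching a fresh head, which is consistent with the abstract's claim that fine-tuned models are more sample-efficient and converge faster than training from scratch.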
Primary Area: neurosymbolic & hybrid AI systems (physics-informed, logic & formal reasoning, etc.)
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 13595