Keywords: neurosymbolic learning, scalability, vectorization, differentiable reasoning
Abstract: Neurosymbolic learning has emerged as a promising paradigm to incorporate
symbolic reasoning into deep learning models.
However, existing frameworks are limited in scalability with respect to both
the training data and the complexity of symbolic programs.
We propose Dolphin, a framework to scale neurosymbolic learning at a fundamental level by mapping both forward chaining and backward gradient propagation in symbolic programs
to vectorized computations.
For this purpose, Dolphin introduces a set of abstractions and primitives
built directly on top of a high-performance deep learning framework like
PyTorch, effectively enabling symbolic programs to be written as PyTorch modules.
Dolphin thereby enables neurosymbolic programs to be written in a language familiar to developers, such as Python, and compiled to computation graphs that are amenable to end-to-end differentiation on GPUs.
We evaluate Dolphin on a suite of 13 benchmarks across 5 neurosymbolic tasks that combine deep learning models for
text, image, or video processing with symbolic programs that involve multi-hop
reasoning, recursion, and even black-box functions like Python `eval()`.
Dolphin achieves comparable or better accuracy on all benchmarks while taking only 0.33%--61.73% (23.23% on average) of the time required by the baselines Scallop, ISED, and IndeCateR+ to train these models on the largest input per task; the baselines time out on most of these inputs.
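To illustrate the idea the abstract describes, the following is a minimal, hypothetical sketch (not Dolphin's actual API) of a symbolic computation expressed as a PyTorch module: given two neural-network output distributions over digits, it computes the distribution over their sum via batched, differentiable tensor operations, so both forward chaining and gradient propagation reduce to vectorized computations.

```python
import torch

class SumOfDigits(torch.nn.Module):
    """Hypothetical example: given two batched distributions over
    digits 0-9, compute the distribution over their sum (0-18)
    using only vectorized, differentiable tensor operations."""

    def forward(self, p1: torch.Tensor, p2: torch.Tensor) -> torch.Tensor:
        # Joint distribution over digit pairs: (B, 10, 10)
        joint = p1.unsqueeze(2) * p2.unsqueeze(1)
        out = torch.zeros(p1.shape[0], 19)
        for s in range(19):
            # Marginalize over all (i, j) pairs with i + j == s.
            pairs = [(i, s - i) for i in range(10) if 0 <= s - i <= 9]
            out[:, s] = sum(joint[:, i, j] for i, j in pairs)
        return out

# Usage: uniform distributions over digits for a batch of 2 inputs.
p = torch.full((2, 10), 0.1)
dist = SumOfDigits()(p, p)
```

Because the module is built from standard tensor operations, it runs batched on a GPU and gradients flow back through it to the upstream neural networks, which is the kind of end-to-end differentiation the abstract refers to.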
Supplementary Material: zip
Primary Area: neurosymbolic & hybrid AI systems (physics-informed, logic & formal reasoning, etc.)
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 10020