Keywords: Neurosymbolic, Deep Learning, Knowledge Representation, Probabilistic Reasoning
TL;DR: We introduce a unified formalism for probabilistic neurosymbolic techniques that tackle classification tasks informed by prior knowledge.
Abstract: Neurosymbolic AI is a growing field of research that aims to combine the learning capabilities of neural networks with the reasoning abilities of symbolic systems. In this paper, we tackle informed classification tasks, i.e. multi-label classification tasks informed by prior knowledge that specifies which combinations of labels are semantically valid. Several neurosymbolic formalisms and techniques have been introduced in the literature, each relying on a particular language to represent prior knowledge. We take a bird's eye view of informed classification and introduce a unified formalism that encapsulates these knowledge representation languages. We then build upon this formalism to identify several concepts in probabilistic reasoning that lie at the core of many techniques across representation languages. We also define a new technique, semantic conditioning at inference, which constrains the system only during inference while leaving training unaffected, an appealing property in the era of off-the-shelf and foundation models. We discuss its theoretical and practical advantages over two other probabilistic neurosymbolic techniques: semantic conditioning and semantic regularization. Finally, we experimentally evaluate and compare the benefits of all three techniques on several large-scale datasets. Our results show that, despite operating only at inference, our technique can efficiently leverage prior knowledge to build more accurate neural-based systems.
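To make the core idea concrete, below is a minimal sketch of semantic conditioning at inference, not the paper's implementation: it assumes the network outputs independent per-label probabilities (sigmoid head) and that prior knowledge is given as a validity predicate over label combinations, which are enumerated by brute force. All names (`semantic_conditioning_at_inference`, `probs`, `is_valid`) are illustrative.

```python
import itertools
import numpy as np

def semantic_conditioning_at_inference(probs, is_valid):
    """Return the most probable label combination allowed by prior knowledge.

    probs: per-label probabilities p(y_i = 1) from an independently trained
           network; training is untouched, only inference is constrained.
    is_valid: predicate over binary label tuples encoding the prior knowledge.
    """
    n = len(probs)
    best, best_score = None, -np.inf
    for y in itertools.product([0, 1], repeat=n):
        if not is_valid(y):
            continue  # conditioning: invalid combinations get probability zero
        # joint log-probability under the label-independence assumption
        score = sum(np.log(p if yi else 1.0 - p) for p, yi in zip(probs, y))
        if score > best_score:
            best, best_score = y, score
    return best

# Toy example: prior knowledge says exactly one of three labels may be active.
probs = np.array([0.7, 0.6, 0.2])
print(semantic_conditioning_at_inference(probs, lambda y: sum(y) == 1))
```

Conditioning on the constraint and taking the mode of the renormalized distribution coincide here, since renormalization does not change the ordering of valid combinations; a practical system would replace the exhaustive enumeration with a tractable representation of the constraint.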
Supplementary Material: zip
Primary Area: neurosymbolic & hybrid AI systems (physics-informed, logic & formal reasoning, etc.)
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Reciprocal Reviewing: I understand the reciprocal reviewing requirement as described on https://iclr.cc/Conferences/2025/CallForPapers. If none of the authors are registered as a reviewer, it may result in a desk rejection at the discretion of the program chairs. To request an exception, please complete this form at https://forms.gle/Huojr6VjkFxiQsUp6.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 1108