EXPLAIN, AGREE and LEARN: A Recipe for Scalable Neural-Symbolic Learning

22 Sept 2023 (modified: 25 Mar 2024) · ICLR 2024 Conference Withdrawn Submission
Keywords: neuro-symbolic learning, variational inference, sampling, discrete latent variable model
TL;DR: Alternative paradigm for scaling neuro-symbolic learning
Abstract: Recent progress in neural-symbolic AI (NeSy) has demonstrated that neural networks can benefit greatly from integration with symbolic reasoning methods in terms of interpretability, data efficiency and generalisation performance. Unfortunately, the symbolic component can lead to intractable computations in more complicated domains. This computational bottleneck has prevented the successful application of NeSy to more practical problems. We present EXPLAIN, AGREE and LEARN, an alternative paradigm that addresses the scalability problem of probabilistic NeSy learning. EXPLAIN uses sampling, guided by a newly introduced diversity criterion, to obtain a representative set of possible explanations from the symbolic component. AGREE then assigns importance to the sampled explanations based on the neural predictions. This defines the learning objective, which for sufficiently many samples is guaranteed to coincide with the objective used by exact probabilistic NeSy approaches. Using this objective, LEARN updates the neural component with direct supervision on its outputs, without the need to propagate gradients through the symbolic component. Our approximate paradigm and its theoretical guarantees are experimentally evaluated and shown to be competitive with existing exact probabilistic NeSy frameworks, while outperforming them in terms of speed.
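The abstract's three-step pipeline can be illustrated on a toy neuro-symbolic task. The sketch below is an assumption-laden simplification, not the paper's implementation: two latent digits with the symbolic constraint that their sum equals a known target, and random probability vectors standing in for neural network outputs. EXPLAIN samples constraint-satisfying assignments ("explanations"), AGREE weights them by the neural predictions, and LEARN converts the weighted explanations into direct soft targets on the network outputs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setting (illustrative only): two latent digits d1, d2 in {0..9},
# symbolic constraint d1 + d2 == target. The probability vectors below
# stand in for neural network outputs on two input images.
p1 = rng.dirichlet(np.ones(10))  # stand-in "neural" prediction for digit 1
p2 = rng.dirichlet(np.ones(10))  # stand-in "neural" prediction for digit 2
target = 7

# EXPLAIN: sample a set of distinct explanations that satisfy the constraint.
# (The paper's diversity criterion is abstracted away here as de-duplication.)
explanations = set()
while len(explanations) < 5:
    d1 = int(rng.integers(0, 10))
    d2 = target - d1
    if 0 <= d2 <= 9:
        explanations.add((d1, d2))
explanations = sorted(explanations)

# AGREE: weight each explanation by how strongly the neural predictions
# agree with it, then normalise over the sampled set.
weights = np.array([p1[a] * p2[b] for a, b in explanations])
weights /= weights.sum()

# LEARN: marginalise the weighted explanations into per-output soft targets.
# These supervise each network output directly (e.g., via cross-entropy),
# so no gradient ever flows through the symbolic component.
t1, t2 = np.zeros(10), np.zeros(10)
for w, (a, b) in zip(weights, explanations):
    t1[a] += w
    t2[b] += w
```

The key design point the abstract highlights is visible in the last step: the symbolic component only produces discrete explanations, while learning reduces to standard supervised updates on the neural outputs.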
Supplementary Material: pdf
Primary Area: neurosymbolic & hybrid AI systems (physics-informed, logic & formal reasoning, etc.)
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2024/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors' identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 5561