Keywords: Neuro-Symbolic Learning, Multinomial Mixture Distribution
Abstract: Neuro-symbolic learning (NSL) aims to integrate neural networks with symbolic reasoning to enhance the interpretability of machine learning models. Existing methods mostly focus on the long-dependency problem of symbolic learning, while the equally important challenge of complex categorization is largely overlooked. To bridge this gap, we propose the Mixed Multinomial Distribution-based NSL (MMD-NSL) framework, which seamlessly integrates the handling of long dependency chains and complex semantic categorization within Knowledge Graphs (KGs). By introducing a continuous Mixed Multinomial Logic Semantic Distribution, we extend traditional Markov Logic Networks (MLNs) to incorporate context-aware semantic embeddings. Our theoretical contributions, including a bijective mapping between MLNs and continuous multinomial distributions, enable capturing the intricate dependencies and varied contexts crucial for NSL tasks.
The framework leverages a bilevel optimization strategy: a transformer-based upper level dynamically learns mixing coefficients akin to attention weights, while the lower level optimizes rule weights to learn both context and rule patterns. Extensive experiments on the DWIE benchmark demonstrate significant advantages of MMD-NSL over four state-of-the-art approaches: it achieves F1-scores 10.47% higher on average than the best-performing baseline across 23 sub-datasets. MMD-NSL advances continuous probabilistic models for neuro-symbolic reasoning and complex relational tasks.
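The bilevel structure described above can be illustrated with a minimal sketch. All names, dimensions, and the attention form below are assumptions for illustration, not the authors' implementation: the upper level produces mixing coefficients via a softmax over context/key similarities (akin to attention), and the lower level holds per-component rule weights that each induce a multinomial distribution over logic rules; the mixing coefficients then combine these into a mixed multinomial distribution.

```python
import numpy as np

# Hypothetical sketch of the bilevel idea: an attention-style upper level
# yields mixing coefficients pi over K mixture components; each component's
# rule weights define a multinomial over R logic rules.
rng = np.random.default_rng(0)
K, R, d = 3, 5, 8  # mixture components, logic rules, embedding dim (assumed)

context = rng.normal(size=d)        # context-aware embedding (e.g., a KG query)
keys = rng.normal(size=(K, d))      # per-component keys (learnable in practice)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Upper level: mixing coefficients akin to scaled dot-product attention weights
pi = softmax(keys @ context / np.sqrt(d))

# Lower level: per-component rule weights -> multinomial over the R rules
W = rng.normal(size=(K, R))
component_dists = np.stack([softmax(w) for w in W])

# Mixed multinomial distribution over the rules
mixture = pi @ component_dists
assert np.isclose(mixture.sum(), 1.0)
```

In training, gradients would flow through both levels (the keys at the upper level and the rule weights at the lower level); the sketch only shows the forward composition of the two.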
Primary Area: neurosymbolic & hybrid AI systems (physics-informed, logic & formal reasoning, etc.)
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 10956