Keywords: Inductive Logic Programming, Rule Induction, Neuro-Symbolic
Abstract: Inductive Logic Programming (ILP) learns logical rules from data, forming an interpretable machine learning model.
Classic symbolic ILP systems perform well on small-scale tasks but suffer from combinatorial explosion as the hypothesis space grows.
Emerging neuro-symbolic ILP methods scale better and are more robust to noisy data.
However, existing neuro-symbolic ILP methods are restricted to constrained language biases, which hampers further scalability.
In this work, we propose Forward Chaining Neural Network (FCNN), a stochastic neural network that can learn logical rules under any language bias.
FCNN relaxes all syntactically correct rules into a continuous space and searches for semantically correct solutions via gradient-based optimization.
Experiments on standard evaluation tasks and recently proposed large-scale tasks show that FCNN outperforms existing methods.
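To make the idea of relaxing rules into a continuous space concrete, here is a minimal, hypothetical sketch (not the paper's actual FCNN implementation): each candidate rule is assigned a learnable weight in [0, 1], one soft forward-chaining step combines truth values with product/max t-norms, and gradient descent on a reconstruction loss drives the weights of semantically correct rules toward 1 and spurious ones toward 0. All rule names, atoms, and hyperparameters below are illustrative assumptions.

```python
# Hypothetical sketch of differentiable forward chaining; NOT the FCNN
# implementation from the paper. Atoms, rules, and hyperparameters are
# invented for illustration.

# Ground atoms indexed 0..3.
atoms = ["parent(a,b)", "parent(b,c)", "grandparent(a,c)", "grandparent(a,b)"]
facts = [1.0, 1.0, 0.0, 0.0]  # initial soft truth values

# Candidate ground rules: (head index, [body indices]).
rules = [
    (2, [0, 1]),  # grandparent(a,c) :- parent(a,b), parent(b,c)  (correct)
    (3, [0, 0]),  # grandparent(a,b) :- parent(a,b), parent(a,b)  (spurious)
]

def forward_step(vals, weights):
    """One soft forward-chaining step: each head atom takes the max of its
    current value and (rule weight * product of body values)."""
    new = list(vals)
    for w, (head, body) in zip(weights, rules):
        conclusion = w
        for b in body:
            conclusion *= vals[b]
        new[head] = max(new[head], conclusion)
    return new

def loss(weights, target):
    vals = forward_step(facts, weights)
    return sum((v - t) ** 2 for v, t in zip(vals, target))

# Target valuation: only grandparent(a,c) should be derived.
target = [1.0, 1.0, 1.0, 0.0]
weights = [0.5, 0.5]

# Gradient descent via finite differences (a stand-in for autodiff).
eps, lr = 1e-4, 0.5
for _ in range(200):
    grads = []
    for i in range(len(weights)):
        bumped = list(weights)
        bumped[i] += eps
        grads.append((loss(bumped, target) - loss(weights, target)) / eps)
    weights = [min(1.0, max(0.0, w - lr * g)) for w, g in zip(weights, grads)]

print([round(w, 2) for w in weights])  # correct rule's weight -> 1, spurious -> 0
```

Because every candidate rule only contributes a differentiable soft score, the same machinery works for any set of syntactically valid candidate rules, which is the sense in which a continuous relaxation removes the need for a restrictive language bias.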
Supplementary Material: zip
Primary Area: neurosymbolic & hybrid AI systems (physics-informed, logic & formal reasoning, etc.)
Submission Number: 9050