Keywords: neuro-symbolic learning, neural theorem proving, explainability, logic, learning, neural program synthesis
TL;DR: Extracting human-readable proof steps for explaining the reasoning process of Logical Neural Networks
Abstract: Automated Theorem Provers (ATPs) are widely used for the verification of logical statements. Explainability is one of the key advantages of ATPs: they provide an expert-readable proof path showing the inference steps taken to conclude correctness. Conversely, Neuro-Symbolic Networks (NSNs) that perform theorem proving do not have this capability. We propose a proof-tracing and filtering algorithm that provides explainable reasoning for Logical Neural Networks (LNNs), a special type of Neural Theorem Prover (NTP).