Abstract: Traditional neural networks achieve impressive classification performance, but what they learn cannot be inspected, verified, or extracted. Neural Logic Networks, on the other hand, have an interpretable structure that enables them to learn a logical mechanism relating the inputs and outputs with AND and OR operations. We generalize these networks with NOT operations and with biases that account for unobserved data, and we develop a rigorous logical and probabilistic model in terms of concept combinations to motivate their use. We also propose a novel factorized IF-THEN rule structure for the model, as well as a modified learning algorithm. Our method improves the state of the art in Boolean network discovery and learns relevant, interpretable rules in tabular classification, notably on examples from the medical and industrial fields where interpretability has tangible value.
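The differentiable AND, OR, and NOT operations mentioned in the abstract can be illustrated with a minimal fuzzy-logic sketch. This is a generic soft-logic formulation (product t-norm with membership weights), not necessarily the paper's exact parameterization; the function names and the weighting scheme are illustrative assumptions.

```python
import numpy as np

def soft_not(x):
    # Fuzzy negation: NOT(x) = 1 - x.
    return 1.0 - x

def soft_and(x, w):
    # Weighted fuzzy AND (product t-norm): an input with weight ~0 is
    # ignored; an input with weight ~1 must be true for a high output.
    return np.prod(1.0 - w * (1.0 - x))

def soft_or(x, w):
    # Weighted fuzzy OR, obtained from AND via De Morgan's law.
    return 1.0 - np.prod(1.0 - w * x)

# Example: the rule "x0 AND NOT x1" with hard (0/1) membership weights.
x = np.array([1.0, 0.0])
w = np.array([1.0, 1.0])
literals = np.array([x[0], soft_not(x[1])])
print(soft_and(literals, w))  # → 1.0
```

With weights relaxed to [0, 1] and the operations composed into layers, such a network remains differentiable, and thresholding the learned weights recovers a discrete logical rule.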
Submission Type: Long submission (more than 12 pages of main content)
Previous TMLR Submission Url: https://openreview.net/forum?id=FxdpxfH02l
Changes Since Last Submission: * A natural extension for inputs with missing values was included into the interpretable structure of the Neural Logic Network (NLN).
* The NLN learning pipeline was extended with a final bias adjustment step, in which the final biases of the rules and logic programs are estimated according to their statistical definition.
* The experiments section on interpretable tabular classification was expanded to explain why the NLN's performance is sometimes significantly below that of a comparable model. This was done through an analysis of interpretability.
* A further analysis of the proportion of the ground truth recovered by each model was also added, showing that the NLN recovers more of it than the compared models.
* We added two new data sets in fields (medicine, cyber-security) where interpretability has tangible value, to show the potential of our approach. In particular, the cyber-security data set is much more challenging, with around 150K samples. In these spotlight applications, we also show how pruning an NLN can be used to merge multiple models (resulting in a 4-rule perfect classifier for the medicine example) or to do transfer learning (transferring the cyber-security model from the learned binary classification task to a new multiclass setting).
Assigned Action Editor: ~antonio_vergari2
Submission Number: 6991