Abstract: Traditional neural networks achieve impressive classification performance, but what they learn cannot be inspected, verified, or extracted. Neural Logic Networks, on the other hand, have an interpretable structure that enables them to learn a logical mechanism relating the inputs and outputs with AND and OR operations. We generalize these networks with NOT operations and biases that take into account unobserved data, and we develop a rigorous logical and probabilistic modeling in terms of concept combinations to motivate their use. We also propose a novel factorized IF-THEN rule structure for the model, as well as a modified learning algorithm. Our method improves the state of the art in Boolean network discovery and is able to learn relevant, interpretable rules in tabular classification.
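As a rough illustration of the AND/OR/NOT mechanism the abstract refers to, the sketch below implements product-based soft logic neurons of the kind commonly used in neural logic networks. The function names and this particular fuzzy-logic formulation are illustrative assumptions, not the paper's exact model: learnable membership weights select which inputs participate in each rule.

```python
import numpy as np

def soft_not(x):
    # Fuzzy negation: NOT x = 1 - x for truth values in [0, 1].
    return 1.0 - x

def soft_and(x, w):
    # Weighted fuzzy AND: an input with weight ~0 is ignored; an input
    # with weight ~1 must be ~1 for the output to be ~1.
    return np.prod(1.0 - w * (1.0 - x))

def soft_or(x, w):
    # Weighted fuzzy OR: the output is ~1 as soon as any input with
    # weight ~1 is ~1.
    return 1.0 - np.prod(1.0 - w * x)

# Example: a rule over three Boolean features, selecting the 1st and 3rd.
x = np.array([1.0, 0.0, 1.0])  # input truth values
w = np.array([1.0, 0.0, 1.0])  # membership weights (learned in practice)
conj = soft_and(x, w)          # AND over the selected inputs -> 1.0
disj = soft_or(x, w)           # OR over the selected inputs  -> 1.0
```

Because both operations are differentiable in `w`, the membership weights can be trained by gradient descent, which is what makes the learned rule structure extractable afterwards.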
Submission Length: Long submission (more than 12 pages of main content)
Changes Since Last Submission: Second revision in response to the 3 reviews.
Includes:
- a new initialization in the method that produces slightly better results across the board
- a new dataset with high dimensionality (450 continuous input features)
- many changes in the text (notation details, clarification of ideas, more insights on the method, correction of typos, etc.)
- more formal definitions in a few instances that were previously described only narratively
- some restructuring of subsections (a subsection on interpretation from the appendix was moved to the main text)
- a new paragraph in related works on probabilistic graphical models and probabilistic circuits
- a final paragraph in the conclusion describing possible future research directions for adapting NLNs into convolutional, recurrent, and graph NLNs
Assigned Action Editor: ~Stefano_Teso1
Submission Number: 3873