Keywords: Concepts, first-order logic, differentiable logic, interpretability
TL;DR: We propose a differentiable logic-based module to incorporate the relations between concepts in the learning process.
Abstract: Concept-based models promote learning in terms of high-level transferable abstractions. These models offer an additional level of transparency compared to a black-box model, as the predictions are a weighted combination of concepts. The relations between concepts are a rich source of information that would complement learning. We propose using the propositional logic derived from the concepts to model these relations and to address the expressivity-vs-interpretability tradeoff in these models. Three architectural variants that give rise to logic-enhanced models are introduced. We analyse several ways of training them and experimentally show that logic-enhanced concept-based models give better concept alignment and interpretability without losing out on performance. These models allow for a richer formal expression of predictions, paving the way for logical reasoning with symbolic concepts.
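To make the idea of a differentiable logic module over concepts concrete, below is a minimal sketch using product t-norm fuzzy semantics over concept probabilities. The operator choice, concept names, and rule are illustrative assumptions, not the paper's specified design.

```python
import torch

# Hypothetical differentiable propositional logic over concept activations
# c in [0, 1], using product t-norm semantics (an assumed choice of fuzzy
# logic; the abstract does not fix a particular semantics).

def soft_not(c):
    return 1.0 - c

def soft_and(a, b):
    return a * b                      # product t-norm

def soft_or(a, b):
    return a + b - a * b              # probabilistic sum (dual t-conorm)

def soft_implies(a, b):
    return soft_or(soft_not(a), b)    # a -> b  ==  (not a) or b

# Example: enforce a known relation between concepts, e.g.
# "striped -> not solid_color", as an auxiliary differentiable loss term.
concepts = torch.tensor([0.9, 0.7], requires_grad=True)  # [striped, solid_color]
rule_satisfaction = soft_implies(concepts[0], soft_not(concepts[1]))
logic_loss = 1.0 - rule_satisfaction  # minimised when the rule holds
logic_loss.backward()
print(concepts.grad)                  # gradients push concepts toward consistency
```

Because every operator is smooth in the concept probabilities, such a loss can be added to the usual task loss and trained end to end, which is the general mechanism a logic-enhanced concept-based model would rely on.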
Submission Number: 35