Interpretable Hierarchical Concept Reasoning through Graph Learning

ICLR 2026 Conference Submission 20623 Authors

19 Sept 2025 (modified: 08 Oct 2025) · CC BY 4.0
Keywords: Concept-based models, explainable AI, neurosymbolic
TL;DR: We introduce Hierarchical Concept Memory Reasoner (H-CMR), a novel CBM that combines rule learning, rule selection, and graph learning to provide interpretability for both concepts and tasks while maintaining state-of-the-art accuracy.
Abstract: Concept-based models (CBMs) are a class of deep learning models that provide interpretability by explaining predictions through high-level concepts. These models first predict concepts and then use them to perform a downstream task. However, current CBMs offer interpretability only for the final task prediction, while the concept predictions themselves are typically made via black-box neural networks. To address this limitation, we propose Hierarchical Concept Memory Reasoner (H-CMR), a new CBM that provides interpretability for both concept and task predictions. H-CMR models relationships between concepts using a learned directed acyclic graph, where edges represent logic rules that define concepts in terms of other concepts. During inference, H-CMR employs a neural attention mechanism to select a subset of these rules, which are then applied hierarchically to predict all concepts and the final task. Experimental results demonstrate that H-CMR matches state-of-the-art performance while enabling strong human interaction through concept and model interventions. The former can significantly improve accuracy at inference time, while the latter can enhance data efficiency during training when background knowledge is available.
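To make the inference procedure described in the abstract concrete, here is a minimal sketch of hierarchical, rule-based concept prediction over a learned DAG. This is an illustration under stated assumptions, not the authors' implementation: all names (ConceptNode, HCMRSketch, the lambda rules) are hypothetical, and it hard-selects one rule per concept with argmax, whereas the paper's neural attention would be soft and differentiable during training.

```python
# Minimal sketch of H-CMR-style hierarchical inference (illustration only,
# not the authors' code). Assumptions: a fixed DAG over concepts, a small
# "rule memory" of candidate logic rules per non-root concept, and a neural
# scorer that selects one rule per concept from the input embedding.
import torch
import torch.nn as nn

class ConceptNode:
    def __init__(self, name, parents, rules):
        self.name = name        # concept identifier
        self.parents = parents  # names of parent concepts (empty for roots)
        self.rules = rules      # candidate rules: callables over parent values

class HCMRSketch(nn.Module):
    def __init__(self, nodes, embed_dim):
        super().__init__()
        self.nodes = nodes  # ConceptNodes in topological order
        # One rule scorer per non-root concept (stands in for the paper's
        # neural attention over the rule memory).
        self.scorers = nn.ModuleDict(
            {n.name: nn.Linear(embed_dim, len(n.rules)) for n in nodes if n.rules})
        # Root concepts have no defining rules; predict them directly.
        self.root_heads = nn.ModuleDict(
            {n.name: nn.Linear(embed_dim, 1) for n in nodes if not n.rules})

    def forward(self, z):
        """z: 1-D input embedding; returns a dict of concept truth values."""
        values = {}
        for n in self.nodes:  # topological order: parents are computed first
            if not n.rules:
                values[n.name] = bool(torch.sigmoid(self.root_heads[n.name](z)) > 0.5)
            else:
                # Hard rule selection for clarity; attention over the rule
                # memory would be soft during training.
                idx = self.scorers[n.name](z).argmax().item()
                values[n.name] = n.rules[idx](*[values[p] for p in n.parents])
        return values

# Toy usage: "bird" is defined by a selected rule over two root concepts.
nodes = [
    ConceptNode("has_wings", [], []),
    ConceptNode("has_feathers", [], []),
    ConceptNode("bird", ["has_wings", "has_feathers"],
                [lambda w, f: w and f, lambda w, f: f]),
]
model = HCMRSketch(nodes, embed_dim=16)
print(model(torch.randn(16)))
```

In this reading, a concept intervention corresponds to overwriting an entry of `values` before downstream concepts are computed, so corrections propagate through the DAG, which is consistent with the abstract's claim that interventions can improve accuracy at inference time.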
Primary Area: interpretability and explainable AI
Submission Number: 20623