Distilling Neural Knowledge into Interpretable Belief Rule Bases

Published: 15 Nov 2025, Last Modified: 08 Mar 2026, AAAI 2026 Bridge (LMReasoning), CC BY 4.0
Keywords: Belief Rule Base; Backpropagation; Knowledge Distillation; Rule Center; Interpretability
Abstract: In recent years, deep learning has achieved remarkable progress in domains such as image recognition, natural language processing, and speech understanding. However, its inherent “black-box” nature restricts interpretability and undermines trust. As a representative symbolic reasoning method, the Belief Rule Base (BRB) offers strong interpretability and transparent inference for complex, uncertain decision-making. Nevertheless, traditional BRB models rely heavily on manually defined rules and parameters, which limits their scalability to large, data-driven tasks. To address this limitation, we propose a knowledge-distillation-based neuro-symbolic framework, termed Rule Distillation, in which a deep neural network acts as the teacher model to guide the training of a parameterized BRB student model. In this framework, rule weights, attribute weights, rule centers, and consequent belief distributions are treated as trainable parameters optimized via gradient descent. Simultaneously, the soft labels generated by the teacher model provide supervisory signals that enable the student model to capture complex class distributions effectively. Extensive experiments on 23 public datasets demonstrate that the proposed parameterized BRB not only inherits the predictive performance of its teacher model but also achieves faster convergence and stronger generalization, while maintaining interpretability. Overall, this study presents an effective pathway toward explainable artificial intelligence (XAI) by balancing predictive performance with model transparency.
Submission Number: 14
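
To make the abstract's setup concrete, the following is a minimal sketch (in PyTorch) of what distilling a neural teacher into a parameterized BRB student could look like. It is not the authors' implementation: the names (`BRBStudent`, `distill_step`), the Gaussian matching degree, the simplified weighted-sum aggregation of consequent beliefs, and the temperature and loss-weighting hyperparameters are illustrative assumptions; the paper's actual rule activation and evidential-reasoning aggregation may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BRBStudent(nn.Module):
    """Parameterized BRB: rule centers, attribute weights, rule weights, and
    consequent belief distributions are all ordinary trainable tensors."""
    def __init__(self, n_attrs, n_rules, n_classes):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(n_rules, n_attrs))    # rule centers
        self.attr_w = nn.Parameter(torch.zeros(n_attrs))              # attribute weights (pre-sigmoid)
        self.rule_w = nn.Parameter(torch.zeros(n_rules))              # rule weights (pre-softplus)
        self.beliefs = nn.Parameter(torch.randn(n_rules, n_classes))  # consequent beliefs (pre-softmax)

    def forward(self, x):                                             # x: (batch, n_attrs)
        # Attribute-weighted Gaussian matching degree of x to each rule center
        # (one common differentiable choice; an assumption here).
        dist = (x.unsqueeze(1) - self.centers) ** 2                   # (batch, n_rules, n_attrs)
        match = torch.exp(-(dist * torch.sigmoid(self.attr_w)).sum(dim=-1))
        # Rule activation weights: rule weight times matching degree, normalized.
        act = F.softplus(self.rule_w) * match
        act = act / act.sum(dim=-1, keepdim=True).clamp_min(1e-8)
        # Simplified aggregation: activation-weighted sum of consequent beliefs.
        return act @ F.softmax(self.beliefs, dim=-1)                  # class probabilities

def distill_step(student, teacher, x, y, optimizer, T=2.0, alpha=0.5):
    """One gradient step combining the teacher's soft labels with hard labels."""
    with torch.no_grad():
        soft = F.softmax(teacher(x) / T, dim=-1)                      # teacher soft labels
    log_p = student(x).clamp_min(1e-8).log()
    loss = alpha * (T * T) * F.kl_div(log_p, soft, reduction="batchmean") \
         + (1 - alpha) * F.nll_loss(log_p, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The sketch only mirrors what the abstract states: the BRB components are optimized by gradient descent like any neural parameters, and the teacher's temperature-softened outputs supply the soft-label term of the training loss alongside the usual hard-label term.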