Keywords: Belief Rule Base (BRB), Large Language Model (LLM), Explainable Artificial Intelligence (XAI), Semantic Rule Elicitation, Evidential Reasoning (ER), AI–Logic Integration
Abstract: Traditional Belief Rule Base (BRB) models offer inherent advantages in interpretability and reasoning under uncertainty. However, their rule construction depends heavily on expert knowledge, leading to an excessive number of rules, difficult parameter optimization, and limited scalability.
To address these challenges, this paper proposes a novel \textbf{LLM-Assisted Belief Rule Base (LLM-BRB)} classification framework. The proposed method leverages the semantic understanding and knowledge-generation capabilities of Large Language Models (LLMs) to automatically construct the prior structure of belief rules. Semantic consistency constraints and a belief normalization mechanism yield an interpretable, logically coherent rule base, while an Evidential Reasoning (ER) fusion mechanism integrates the inference outcomes of multiple activated rules.
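The abstract does not spell out the fusion step; for context, below is a minimal sketch of the standard analytical ER combination rule (Yang–Xu form) commonly used in BRB systems, assuming normalized rule activation weights and per-rule belief distributions over the consequent classes. The function name `er_fuse` and its argument layout are illustrative, not the paper's actual interface.

```python
import numpy as np

def er_fuse(activation_weights, beliefs):
    """Fuse K activated belief rules via the analytical Evidential
    Reasoning (ER) combination (Yang-Xu form, as used in BRB systems).

    activation_weights : (K,) rule activation weights w_k in [0, 1]
    beliefs            : (K, N) belief degrees beta_{n,k}; each row
                         sums to at most 1 (residual = incompleteness)
    Returns the fused belief distribution over the N consequent classes.
    """
    w = np.asarray(activation_weights, dtype=float)
    B = np.asarray(beliefs, dtype=float)
    K, N = B.shape

    total = B.sum(axis=1)                              # sum_i beta_{i,k} per rule
    # prod_k (w_k * beta_{n,k} + 1 - w_k * sum_i beta_{i,k}) for each class n
    per_class = np.prod(w[:, None] * B + (1.0 - w * total)[:, None], axis=0)
    d_tilde = np.prod(1.0 - w * total)                 # residual from incomplete beliefs
    d_bar = np.prod(1.0 - w)                           # residual from unused rule weight

    mu = 1.0 / (per_class.sum() - (N - 1) * d_tilde)   # normalization factor
    return mu * (per_class - d_tilde) / (1.0 - mu * d_bar)

# Example: two activated rules voting over two classes.
w = [0.6, 0.4]
B = [[0.8, 0.2],   # rule 1 mostly supports class 0
     [0.3, 0.7]]   # rule 2 mostly supports class 1
print(er_fuse(w, B))  # ~[0.644, 0.356]: fused beliefs favor class 0
```

Unlike simple weighted averaging, this nonlinear combination preserves any belief left unassigned by incomplete rules rather than silently redistributing it, which is what makes the fused output usable as a calibrated, interpretable classification verdict.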
The proposed LLM-BRB framework outperforms the traditional BRB and benchmark machine learning models in classification performance, without compromising interpretability.
The proposed framework highlights the potential of the \textbf{AI + Logic} paradigm for building explainable and trustworthy decision-making models.
Submission Number: 18