BELIEF - Bayesian Sign Entropy Regularization for LIME Framework

Revoti Prasad Bora, Philipp Terhörst, Raymond N. J. Veldhuis, Raghavendra Ramachandra, Kiran Bylappa Raja

Published: 2025 (UAI 2025). Last Modified: 03 Mar 2026. License: CC BY-SA 4.0
Abstract: Explanations produced by Local Interpretable Model-agnostic Explanations (LIME) are often inconsistent across runs, making them unreliable for eXplainable AI (XAI). The inconsistency stems from sign flips and variability in the ranks of the segments across runs. We propose a Bayesian regularization approach that reduces sign flips, which in turn stabilizes feature rankings and yields significantly more consistent explanations. The proposed approach enforces sparsity by incorporating a Sign Entropy prior on the coefficient distribution and dynamically eliminates features during optimization. Our results demonstrate that explanations from the proposed method exhibit significantly better consistency and fidelity than LIME and its earlier variants. Further, our approach achieves consistency and fidelity comparable to the latest LIME variant, SLICE (CVPR 2024), at a significantly lower execution time.
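To make the notion of sign inconsistency concrete, the following sketch (not the authors' code; the coefficient values are hypothetical) computes the binary entropy of a LIME coefficient's sign across repeated runs. A segment whose coefficient keeps the same sign in every run has sign entropy 0; one that flips sign between runs has entropy close to 1, which is the instability the Sign Entropy prior is designed to penalize.

```python
# Illustrative sketch: sign entropy of one LIME coefficient over several runs.
import numpy as np

def sign_entropy(coeffs):
    """Binary entropy of the sign of a coefficient across runs."""
    p = np.mean(np.asarray(coeffs) > 0)  # fraction of runs with a positive sign
    if p in (0.0, 1.0):
        return 0.0  # sign never flips -> zero entropy (perfectly consistent)
    return float(-p * np.log2(p) - (1 - p) * np.log2(1 - p))

# Hypothetical coefficients for one image segment over five LIME runs:
stable = [0.31, 0.28, 0.35, 0.30, 0.33]     # sign never flips
flipping = [0.31, -0.05, 0.28, -0.12, 0.20]  # sign flips between runs

print(sign_entropy(stable))    # 0.0
print(sign_entropy(flipping))  # ~0.97 (highly inconsistent)
```

A prior that favors low sign entropy pushes unstable coefficients toward zero, so such segments are pruned during optimization rather than reported with an unreliable sign.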