BELIEF - Bayesian Sign Entropy Regularization for LIME Framework

Published: 07 May 2025 · Last Modified: 13 Jun 2025
Venue: UAI 2025 Poster
License: CC BY 4.0
Keywords: Sign Entropy Regularization, LIME, BayLIME, SLICE, XAI, Consistent Explanation
Abstract: Explanations produced by Local Interpretable Model-agnostic Explanations (LIME) are often inconsistent across different runs, making them unreliable for eXplainable AI (XAI). The inconsistency stems from sign flips and variability in the ranks of the segments across runs. We propose a Bayesian regularization approach that reduces sign flips, which in turn stabilizes feature rankings and yields significantly more consistent explanations. The proposed approach enforces sparsity by incorporating a Sign Entropy prior on the coefficient distribution and dynamically eliminates features during optimization. Our results demonstrate that the explanations from the proposed method exhibit significantly better consistency and fidelity than LIME and its earlier variants. Further, our approach achieves comparable consistency and fidelity with a significantly lower execution time than the latest LIME variant, SLICE (CVPR 2024).
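The abstract's notion of sign-flip inconsistency can be made concrete with a small illustration. The paper's exact prior formulation is not given here, so the function below is only a plausible sketch of one way to measure the sign entropy of a segment's coefficient across repeated LIME fits: the binary entropy of the probability that the coefficient is positive. The function name and the treatment of zero coefficients are assumptions, not the authors' definition.

```python
import numpy as np

def sign_entropy(coefs):
    """Binary entropy (in bits) of a coefficient's sign across runs.

    coefs: coefficients of the SAME segment from repeated LIME fits.
    Returns 0.0 when the sign never flips and 1.0 when positive and
    negative signs are equally frequent. Zero coefficients are counted
    as non-positive here (an assumption for this sketch).
    """
    p = float(np.mean(np.asarray(coefs) > 0))  # fraction of positive-sign runs
    if p in (0.0, 1.0):
        return 0.0  # sign is perfectly stable: no entropy
    return float(-(p * np.log2(p) + (1 - p) * np.log2(1 - p)))

# A segment whose coefficient keeps its sign has zero sign entropy,
# while one that flips sign half the time is maximally inconsistent.
stable = sign_entropy([0.8, 0.5, 0.9, 0.7])      # -> 0.0
unstable = sign_entropy([0.8, -0.5, 0.9, -0.7])  # -> 1.0
```

A prior that penalizes high sign entropy would push such unstable coefficients toward zero, which is consistent with the sparsity-inducing, feature-eliminating behavior the abstract describes.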
Supplementary Material: zip
Latex Source Code: zip
Code Link: https://github.com/rebathip/BELIEF.git
Signed PMLR Licence Agreement: pdf
Submission Number: 397