Towards More Stable LIME For Explainable AI

Published: 01 Jan 2022, Last Modified: 10 Nov 2024 · ISPACS 2022 · CC BY-SA 4.0
Abstract: Although AI's development is remarkable, end users do not know how an AI system has reached a specific conclusion because of the black-box nature of AI algorithms such as deep learning. This has given rise to the field of explainable AI (XAI), where techniques are being developed to explain AI algorithms. One such technique is Local Interpretable Model-Agnostic Explanations (LIME). LIME is popular because it is model-agnostic and works well with text, tabular, and image data. While it has good features, there is still room for improvement in the original LIME algorithm, especially its stability. In this work, the stability of LIME is reviewed, and three approaches are investigated for their effectiveness in improving stability: 1) using a high sample size to obtain a stable ordering; 2) using an averaging method to reduce region flipping; and 3) evaluating different super-pixel segmentation algorithms for generating stable LIME outcomes. The experimental results show a definite increase in the stability of the improved LIME compared to the baseline LIME, and thus in the reliability of using it in practice.
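
The following is a minimal sketch, not the authors' released code, of how the three stability ideas in the abstract could be combined using the public `lime` and `scikit-image` packages: a larger `num_samples`, averaging of superpixel weights over repeated runs, and a swappable super-pixel segmentation algorithm. The specific parameter values (`num_samples=2000`, `n_runs=10`, `n_segments=50`) are illustrative assumptions, not values reported in the paper.

```python
import numpy as np
from lime import lime_image
from lime.wrappers.scikit_image import SegmentationAlgorithm


def explain_with_averaging(image, classifier_fn, label,
                           num_samples=2000,   # 1) larger sample size for a stabler ordering
                           n_runs=10,          # 2) average several runs to reduce region flipping
                           segmenter="slic"):  # 3) try different super-pixel algorithms
    """Return per-superpixel weights averaged over repeated LIME runs (illustrative sketch)."""
    # Wrap a scikit-image segmenter; 'quickshift' and 'felzenszwalb' are other options to compare.
    segmentation_fn = SegmentationAlgorithm(segmenter, n_segments=50)
    explainer = lime_image.LimeImageExplainer()

    accumulated = None
    for _ in range(n_runs):
        explanation = explainer.explain_instance(
            image, classifier_fn,
            labels=(label,), top_labels=None,
            num_samples=num_samples,
            segmentation_fn=segmentation_fn)
        # local_exp maps a label to a list of (superpixel_id, weight) pairs.
        weights = dict(explanation.local_exp[label])
        vec = np.array([weights.get(i, 0.0)
                        for i in range(explanation.segments.max() + 1)])
        accumulated = vec if accumulated is None else accumulated + vec

    # Averaged weights give a more stable superpixel ranking across runs.
    return accumulated / n_runs
```

In this sketch, stability could be assessed by repeating the whole procedure and comparing the resulting superpixel rankings (for example, how often the top-ranked regions change), which is the kind of flipping behaviour the abstract refers to.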