Reproducibility Study of “Are Your Explanations Reliable?” Investigating the Stability of LIME in Explaining Text Classifiers by Marrying XAI and Adversarial Attack
Abstract:
This work investigates the reproducibility of “Are Your Explanations Reliable?” Investigating the Stability of LIME in Explaining Text Classifiers by Marrying XAI and Adversarial Attack by Burger et al. (2023). Our objective is to replicate and verify the paper’s findings. We use the code provided by the authors as a foundation and implement the missing segments ourselves, along with substantial additions. Our results suggest that the claim of inherent instability is only partially reproducible, owing to hyperparameters left unspecified in the paper. Nonetheless, we successfully reproduced and extended the results regarding the choice of rank-biased overlap (RBO) as the similarity measure. The third claim was only partially reproducible because of constrained computational resources; however, we observed consistent trends on a small subset of the test data. In conclusion, our reproducibility study supports all three claims to varying degrees.
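For readers unfamiliar with the similarity measure discussed above, the following is a minimal sketch of truncated rank-biased overlap (RBO) between two ranked lists (e.g. feature rankings produced by LIME). The function name, truncation handling, and default persistence parameter `p=0.9` are illustrative choices of ours, not the authors' implementation.

```python
def rbo(list1, list2, p=0.9, depth=None):
    """Truncated rank-biased overlap of two ranked lists.

    RBO(S, T, p) = (1 - p) * sum_{d>=1} p^(d-1) * A_d,
    where A_d is the fraction of items shared by the top-d prefixes.
    Higher p weights deeper ranks more heavily. This sketch truncates
    the infinite sum at `depth` instead of extrapolating the tail.
    """
    depth = depth or max(len(list1), len(list2))
    score = 0.0
    for d in range(1, depth + 1):
        # Agreement at depth d: overlap of the two top-d prefixes.
        overlap = len(set(list1[:d]) & set(list2[:d]))
        score += p ** (d - 1) * (overlap / d)
    return (1 - p) * score
```

Because the sum is truncated rather than extrapolated, identical lists score below 1; disjoint lists score exactly 0.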
Submission Length: Regular submission (no more than 12 pages of main content)
Previous TMLR Submission Url: https://openreview.net/forum?id=UGwWjbIiea
Changes Since Last Submission: Reformatted the abstract; it is now also included in the PDF.
Assigned Action Editor: ~Shinichi_Nakajima2
Submission Number: 2451