Evaluation of Attribution Explanations without Ground Truth

22 Sept 2022 (modified: 13 Feb 2023) · ICLR 2023 Conference Withdrawn Submission · Readers: Everyone
Keywords: Interpretable machine learning, Explainable AI
TL;DR: This paper proposes a metric to evaluate the objectiveness of explanation methods without requiring ground-truth explanations.
Abstract: This paper proposes a metric to evaluate the objectiveness of explanation methods for neural networks, i.e., the accuracy of the estimated importance/attribution/saliency values of input variables. Such evaluation is crucial for the development of explainable AI, but it also presents a significant challenge: the ground-truth attribution value of each input variable is usually unavailable. We therefore design a metric that evaluates the objectiveness of an attribution map without ground truth. We use this metric to evaluate eight benchmark attribution methods, which provides new insights into these methods. We will release the code when the paper is accepted.
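To make the object of evaluation concrete, the sketch below shows how an attribution map (the quantity the abstract refers to) is typically produced for a classifier, here with Integrated Gradients, one of the standard attribution methods. This is only an illustration of what such methods output, not the paper's proposed metric; the toy model, input shapes, and step count are placeholder assumptions.

```python
# Minimal sketch: computing an attribution map with Integrated Gradients.
# The model and input below are toy placeholders, not from the paper.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy classifier
model.eval()

x = torch.rand(1, 1, 28, 28)       # input image
baseline = torch.zeros_like(x)     # all-zero reference input
target_class = 3
steps = 50

# Integrated Gradients: average the gradients of the target-class score
# along the straight-line path from the baseline to the input, then
# scale by (input - baseline) to obtain per-pixel attribution values.
total_grad = torch.zeros_like(x)
for alpha in torch.linspace(0.0, 1.0, steps):
    point = baseline + alpha * (x - baseline)
    point.requires_grad_(True)
    score = model(point)[0, target_class]
    grad, = torch.autograd.grad(score, point)
    total_grad += grad

attribution = (x - baseline) * total_grad / steps  # same shape as x
print(attribution.shape)  # torch.Size([1, 1, 28, 28])
```

Each entry of `attribution` estimates the importance of one input variable (pixel); evaluating how accurate such estimates are, without access to ground-truth attributions, is the problem the paper addresses.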
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Deep Learning and representational learning
