Evaluation of Feature-Based Explanations

Anonymous

17 Jan 2022 (modified: 05 May 2023) · Submitted to BT@ICLR2022 · Readers: Everyone
Keywords: Explainable AI, evaluation
Abstract: This blog post describes the contribution of the paper "Evaluations and Methods for Explanation through Robustness Analysis" by [Hsieh et al.](https://openreview.net/forum?id=Hye4KeSYDr), which proposes a novel, robustness-based way of evaluating explanations and of constructing explanations that are more robust than existing insertion- and removal-based approaches. Such approaches suffer from two drawbacks: 1. When feature importance is estimated by removing a feature, i.e., setting it to a baseline value, features whose values deviate strongly from the baseline tend to receive inflated importance; for example, setting RGB pixels to black makes bright pixels look more important. 2. When feature importance is estimated by replacing a feature with a value sampled from the data distribution (via a generative model), the generative model's own bias leaks into the evaluation, and not all domains admit a suitable generative model.
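
To make the first drawback concrete, here is a minimal sketch (not taken from the paper; the toy linear model, the input values, and the zero baseline are illustrative assumptions) of removal-based feature importance: each feature is set to the baseline value and the drop in the model's output is taken as that feature's importance, so features whose values lie far from the baseline naturally receive the largest scores.

```python
import numpy as np

def removal_importance(model, x, baseline):
    """Estimate per-feature importance by replacing each feature
    with its baseline value and measuring the drop in model output."""
    scores = np.zeros_like(x, dtype=float)
    base_out = model(x)
    for i in range(len(x)):
        x_removed = x.copy()
        x_removed[i] = baseline[i]      # "remove" feature i
        scores[i] = base_out - model(x_removed)
    return scores

# Hypothetical toy model: output is just the sum of pixel intensities.
model = lambda x: x.sum()

x = np.array([0.9, 0.5, 0.1])           # bright, medium, dark pixel
baseline = np.zeros_like(x)              # "removal" = set pixel to black

print(removal_importance(model, x, baseline))
# -> [0.9 0.5 0.1]: the bright pixel gets the highest score simply
#    because it lies farthest from the black baseline.
```

Under this scheme the dark pixel is scored lowest even if it is just as informative to the model; replacing the fixed baseline with values sampled from a generative model avoids this artifact but, as the second drawback notes, imports that generative model's own biases instead.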
ICLR Paper: https://openreview.net/forum?id=Hye4KeSYDr