TL;DR: Based on known methodological caveats of the Model Parameter Randomisation Test (MPRT), we introduce two adaptations---Smooth MPRT and Efficient MPRT---for more reliable XAI evaluation.
Abstract: The Model Parameter Randomisation Test (MPRT) is widely acknowledged in the eXplainable Artificial Intelligence (XAI) community for its well-motivated evaluative principle: that the explanation function should be sensitive to changes in the parameters of the model function. However, recent works have identified several methodological caveats in the empirical interpretation of the MPRT. To address these caveats, we introduce two adaptations of the original MPRT---Smooth MPRT and Efficient MPRT. The former minimises the impact of noise on the evaluation results through sampling; the latter circumvents the need for biased similarity measurements by re-interpreting the test through the explanation's rise in complexity after full parameter randomisation. Our experimental results demonstrate that the proposed variants improve metric reliability, thus enabling a more trustworthy application of XAI methods.
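To make the two adaptations concrete, below is a minimal sketch of both tests, assuming a PyTorch classifier, an attribution function `explain(model, x, y) -> np.ndarray`, and a similarity measure `similarity(a, b)`. All names, the layer randomisation scheme, and the entropy-based complexity estimate are illustrative assumptions, not the paper's reference implementation.

```python
# Minimal sketch of Smooth MPRT and Efficient MPRT (illustrative only).
import copy
import numpy as np
import torch


def randomise_all_layers(model: torch.nn.Module) -> torch.nn.Module:
    """Return a copy of the model with every parameter re-initialised."""
    randomised = copy.deepcopy(model)
    for param in randomised.parameters():
        torch.nn.init.normal_(param)  # assumed re-initialisation scheme
    return randomised


def smooth_mprt(model, explain, similarity, x, y, n_noise_samples=50, sigma=0.1):
    """Smooth MPRT: average explanations over noisy copies of the input
    before comparing, so estimator noise contributes less to the score."""
    def denoised_explanation(m):
        samples = [
            explain(m, x + sigma * torch.randn_like(x), y)
            for _ in range(n_noise_samples)
        ]
        return np.mean(samples, axis=0)

    e_original = denoised_explanation(model)
    e_randomised = denoised_explanation(randomise_all_layers(model))
    return similarity(e_original, e_randomised)


def entropy_complexity(e: np.ndarray, n_bins: int = 100) -> float:
    """Discrete entropy of the histogram of absolute attribution values,
    used here as an assumed stand-in for the complexity measure."""
    hist, _ = np.histogram(np.abs(e), bins=n_bins)
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())


def efficient_mprt(model, explain, x, y):
    """Efficient MPRT: instead of a similarity measure, report the rise in
    explanation complexity after full parameter randomisation."""
    c_original = entropy_complexity(explain(model, x, y))
    c_randomised = entropy_complexity(explain(randomise_all_layers(model), x, y))
    return c_randomised / max(c_original, 1e-12)  # > 1 suggests sensitivity
```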
Submission Track: Full Paper Track
Application Domain: Computer Vision
Survey Question 1: For XAI methods to be trusted in real-world applications, they need to be faithful to the model behaviour and sensitive to its parameters. The MPRT aims to evaluate this quality for different explanation methods---however, several methodological shortcomings of this critical test have recently been uncovered. Nevertheless, since the principle guiding this test is intrinsically well-motivated, we propose two extensions of the original formulation to address the identified shortcomings, and we demonstrate their increased reliability.
Survey Question 2: Incorporating explainability into our approach is critical since we employ vision models with between 60 thousand and 38 million learned parameters to solve their classification tasks. For this reason, explainability becomes instrumental in understanding a given model prediction. Our focus is to explore the necessary methodological requirements for applying (and evaluating) XAI methods more widely in various ML applications, a question that has largely been left unexplored by the XAI community.
Survey Question 3: In our work, we utilise different local attribution-based explanation methods (LRP-ε, LRP-z+, GradCAM, Saliency, GradientShap, IntegratedGradients, SmoothGrad, InputXGradient, Guided Backpropagation, Gradient) to understand model predictions.
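For illustration only, a few of the listed attribution methods could be computed as below; the submission does not state which library or model was used, so the choice of Captum, the ResNet-18 backbone, and the dummy inputs are assumptions.

```python
# Illustrative use of Captum for a subset of the listed methods (assumed setup).
import torch
import torchvision
from captum.attr import GradientShap, IntegratedGradients, NoiseTunnel, Saliency

model = torchvision.models.resnet18(weights=None).eval()  # assumed backbone
inputs = torch.randn(4, 3, 224, 224, requires_grad=True)  # dummy image batch
labels = torch.tensor([0, 1, 2, 3])                       # dummy target classes

# Gradient/Saliency: raw input gradients of the target logit.
saliency = Saliency(model).attribute(inputs, target=labels)

# IntegratedGradients: path-integrated gradients from a zero baseline.
ig = IntegratedGradients(model).attribute(inputs, target=labels, n_steps=50)

# SmoothGrad: saliency averaged over noisy copies of the input.
smoothgrad = NoiseTunnel(Saliency(model)).attribute(
    inputs, nt_type="smoothgrad", nt_samples=25, target=labels
)

# GradientShap: expected gradients w.r.t. a distribution of baselines.
gradshap = GradientShap(model).attribute(
    inputs, baselines=torch.zeros_like(inputs), target=labels
)
```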
Submission Number: 64