TL;DR: We show, in theory and in practice, that combining multiple explanation methods for deep neural networks improves the resulting explanation.
Abstract: Despite a growing literature on explaining neural networks, no consensus has been reached on how to explain a neural network decision or how to evaluate an explanation.
Our contributions in this paper are twofold. First, we investigate schemes to combine explanation methods and reduce model uncertainty to obtain a single aggregated explanation. The aggregated explanation is more robust and aligns better with the neural network than any single explanation method.
Second, we propose a new approach to evaluating explanation methods that circumvents the need for manual evaluation and does not rely on the alignment of neural network and human decision processes.
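For intuition, here is a minimal sketch of one plausible aggregation scheme: mean aggregation of normalized attribution maps. The function name and the normalization choice are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def aggregate_explanations(attribution_maps):
    """Combine per-method attribution maps into one aggregated explanation.

    Sketch of mean aggregation (hypothetical helper): each map is
    standardized to zero mean and unit variance so that no single
    method dominates the average, then the maps are averaged.
    """
    normalized = []
    for attr in attribution_maps:
        attr = np.asarray(attr, dtype=np.float64)
        normalized.append((attr - attr.mean()) / (attr.std() + 1e-12))
    return np.mean(normalized, axis=0)

# Usage: combine e.g. saliency, gradient*input, and integrated-gradients
# maps, all assumed to share the input's shape.
# combined = aggregate_explanations([saliency_map, grad_x_input, ig_map])
```

Standardizing before averaging is one common choice when methods produce attributions on different scales; the paper's actual combination schemes may differ.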
Code: https://drive.google.com/drive/folders/1ZWozeTQoLni13rltt6JvLYXEEsGBEF3X?usp=sharing
Keywords: explainability, deep learning, interpretability, XAI