Interpretations are useful: penalizing explanations to align neural networks with prior knowledge

25 Sept 2019 (modified: 22 Oct 2023) · ICLR 2020 Conference Blind Submission
Keywords: explainability, deep learning, interpretability, computer vision
TL;DR: Explanations are useful now! We introduce CDEP, a technique for penalizing explanations in order to improve predictive accuracy.
Abstract: For an explanation of a deep learning model to be effective, it must both provide insight into the model and suggest a corresponding action to achieve some objective. Too often, the litany of proposed explainable deep learning methods stops at the first step, providing practitioners with insight into a model but no way to act on it. In this paper, we propose contextual decomposition explanation penalization (CDEP), a method which enables practitioners to leverage existing explanation methods to increase the predictive accuracy of deep learning models. In particular, when shown that a model has incorrectly assigned importance to some features, CDEP enables practitioners to correct these errors by directly regularizing the provided explanations. Using explanations provided by contextual decomposition (CD) (Murdoch et al., 2018), we demonstrate the ability of our method to increase performance on an array of toy and real datasets.
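The abstract's core idea, augmenting the usual prediction loss with a differentiable penalty on the importance attributed to features the practitioner has flagged as irrelevant, can be sketched as follows. This is a minimal PyTorch illustration, not the paper's implementation: it substitutes a simple gradient-times-input saliency for the contextual decomposition (CD) scores CDEP actually penalizes, and the names `explanation_penalty`, `cdep_style_loss`, and `spurious_mask` are hypothetical.

```python
import torch
import torch.nn.functional as F

def explanation_penalty(model, x, spurious_mask):
    """Differentiable penalty on importance attributed to flagged features.

    A stand-in for the CD scores used by CDEP: gradient * input serves as a
    simple saliency proxy. `spurious_mask` is 1 where the practitioner has
    marked features the model should NOT rely on.
    """
    x = x.clone().detach().requires_grad_(True)
    logits = model(x)
    # create_graph=True keeps the penalty differentiable w.r.t. the weights,
    # so it can be minimized by ordinary backpropagation during training.
    grads = torch.autograd.grad(logits.sum(), x, create_graph=True)[0]
    attribution = grads * x
    # Push attributed importance on the flagged features toward zero.
    return (attribution * spurious_mask).abs().mean()

def cdep_style_loss(model, x, y, spurious_mask, lam=1.0):
    """Total loss = prediction loss + lam * explanation penalty."""
    pred_loss = F.cross_entropy(model(x), y)
    return pred_loss + lam * explanation_penalty(model, x, spurious_mask)
```

The hyperparameter `lam` trades off fitting the labels against matching the prior knowledge encoded in `spurious_mask`; the released code linked below contains the actual CD-based penalty.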
Code: https://drive.google.com/drive/folders/16XHi-Onen2gjOvRx3qIUP1Z-3SvrAY4P?usp=sharing
Community Implementations: [2 code implementations](https://www.catalyzex.com/paper/arxiv:1909.13584/code)