Explaining Neural Networks Semantically and Quantitatively

27 Sept 2018 (modified: 05 May 2023) · ICLR 2019 Conference Withdrawn Submission · Readers: Everyone
Abstract: This paper presents a method to explain the knowledge encoded in a convolutional neural network (CNN) quantitatively and semantically. Analyzing the specific rationale behind each prediction made by the CNN is one of the key issues in understanding neural networks, and it also has significant practical value in certain applications. In this study, we propose to distill knowledge from the CNN into an explainable additive model, so that we can use the explainable model to provide a quantitative explanation for the CNN prediction. We analyze the typical bias-interpreting problem of the explainable model and develop prior losses to guide the learning of the explainable additive model. Experimental results demonstrate the effectiveness of our method.
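
The additive formulation lends itself to a compact implementation. Below is a minimal PyTorch sketch of the idea, not the paper's exact formulation: the names AdditiveExplainer and distill_step, the concept-score inputs, the KL-based prior loss, and the weighting lam are all illustrative assumptions. The sketch fits one weight per semantic concept so the additive model's output matches the CNN's logit, while a prior loss keeps the weight distribution from collapsing onto a few concepts (the bias-interpreting problem mentioned above).

```python
# Minimal sketch (PyTorch), NOT the paper's exact formulation.
# Assumed form: y_hat = b + sum_i alpha_i * g_i(x), where g_i(x) are scores
# from fixed, pre-trained "concept" models and alpha_i are learned weights.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdditiveExplainer(nn.Module):
    def __init__(self, num_concepts: int):
        super().__init__()
        # One additive weight per semantic concept, plus a bias term.
        self.alpha = nn.Parameter(torch.zeros(num_concepts))
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, concept_scores: torch.Tensor) -> torch.Tensor:
        # concept_scores: (batch, num_concepts), e.g. outputs of pre-trained
        # concept classifiers applied to the same input image.
        return concept_scores @ self.alpha + self.bias

def distill_step(explainer, concept_scores, cnn_logit, prior, lam=0.1):
    """One training step: mimic the CNN output and regularize the weights.

    prior: (num_concepts,) probability vector (e.g. uniform) that the
    normalized weights are pulled toward, to discourage bias-interpreting.
    """
    y_hat = explainer(concept_scores)
    distill_loss = F.mse_loss(y_hat, cnn_logit)              # match the CNN
    w = F.softmax(explainer.alpha, dim=0)                    # normalized weights
    prior_loss = F.kl_div(w.log(), prior, reduction="sum")   # stay near prior
    return distill_loss + lam * prior_loss
```

In this sketch a uniform prior is the natural default; the paper's actual prior losses may take a different form, and the learned alpha values are what provide the quantitative, per-concept explanation of each prediction.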
Keywords: Network interpretability, deep learning, knowledge distillation, convolutional neural networks
TL;DR: This paper presents a method to explain the knowledge encoded in a convolutional neural network (CNN) quantitatively and semantically.