Explaining Neural Networks Semantically and Quantitatively

Sep 27, 2018 · ICLR 2019 Conference Withdrawn Submission
  • Abstract: This paper presents a method to explain the knowledge encoded in a convolutional neural network (CNN) quantitatively and semantically. Analyzing the specific rationale behind each prediction made by a CNN is one of the key issues in understanding neural networks, and it is also of significant practical value in certain applications. In this study, we propose to distill knowledge from the CNN into an explainable additive model, so that the explainable model can provide a quantitative explanation for each CNN prediction. We analyze the typical bias-interpreting problem of the explainable model and develop prior losses to guide the learning of the explainable additive model. Experimental results demonstrate the effectiveness of our method.
  • Keywords: Network interpretability, deep learning, knowledge distillation, convolutional neural networks
  • TL;DR: This paper presents a method to explain the knowledge encoded in a convolutional neural network (CNN) quantitatively and semantically.
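The abstract describes distilling a CNN into an explainable additive model, with a prior loss to counter bias-interpreting. The following is a minimal sketch of that general idea, not the paper's actual method: it fits an additive surrogate `y ≈ H @ alpha` over hypothetical concept activations `H` extracted from a CNN, where the penalty `lam * ||alpha - prior||^2` stands in for a prior loss that pulls the learned concept weights toward a prior. All names (`fit_additive_explainer`, `H`, `prior`, `lam`) are illustrative assumptions.

```python
import numpy as np

def fit_additive_explainer(H, y, prior=None, lam=0.1):
    """Fit an additive surrogate y ~ H @ alpha for a black-box score y.

    Minimizes ||H @ alpha - y||^2 + lam * ||alpha - prior||^2, an
    illustrative stand-in for a prior loss guiding the explainer.
    Closed form: alpha = (H^T H + lam I)^{-1} (H^T y + lam * prior).
    """
    k = H.shape[1]
    if prior is None:
        prior = np.zeros(k)  # no prior knowledge: shrink toward zero
    A = H.T @ H + lam * np.eye(k)
    b = H.T @ y + lam * prior
    return np.linalg.solve(A, b)

# Toy demo: "concept activations" H and a CNN score y they should explain.
rng = np.random.default_rng(0)
H = rng.normal(size=(200, 5))
true_alpha = np.array([1.0, -0.5, 0.0, 2.0, 0.3])
y = H @ true_alpha + 0.01 * rng.normal(size=200)

alpha = fit_additive_explainer(H, y)
# Each alpha[i] now quantifies how much concept i contributes to the score,
# which is the kind of quantitative, per-concept explanation the paper targets.
```

The prior term matters because an unconstrained surrogate can assign extreme weights to a few concepts (the bias-interpreting problem the abstract mentions); shrinking toward a prior keeps the attribution spread plausible.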