Visualizing Deep Neural Network Decisions: Prediction Difference Analysis

01 Mar 2021 (modified: 14 Feb 2017) · ICLR 2017 conference submission
  • TL;DR: Method for visualizing evidence for and against deep convolutional neural network classification decisions in a given input image.
  • Abstract: This article presents the prediction difference analysis method for visualizing the response of a deep neural network to a specific input. When classifying images, the method highlights areas in a given input image that provide evidence for or against a certain class. It overcomes several shortcomings of previous methods and provides additional insight into the decision-making process of classifiers. Making neural network decisions interpretable through visualization is important both to improve models and to accelerate the adoption of black-box classifiers in application areas such as medicine. We illustrate the method in experiments on natural images (ImageNet data), as well as medical images (MRI brain scans).
  • Keywords: Deep learning, Applications
  • Conflicts: uva.nl, vub.ac.be, cifar.ca, uwaterloo.ca, ru.nl, openai.com
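The core idea behind prediction difference analysis is to score each image region by how much the classifier's output changes when that region is marginalized out: relevance of a patch is the log-odds of the prediction on the full image minus the log-odds after the patch is replaced by samples from a prior. The sketch below is a minimal, hedged illustration of that idea, not the paper's exact implementation: it uses a toy `toy_model` classifier and a crude uniform-noise prior in place of the conditional sampling the paper proposes, and assumes the image size is divisible by the patch size.

```python
import numpy as np

def log_odds(p, eps=1e-8):
    """Log2 odds of a probability, clipped to avoid division by zero."""
    p = np.clip(p, eps, 1 - eps)
    return np.log2(p / (1 - p))

def prediction_difference(model_fn, image, patch=4, n_samples=8, seed=None):
    """Approximate prediction difference analysis for a binary classifier.

    relevance(patch) = log-odds(model on full image)
                     - log-odds(model with the patch marginalized out)

    Marginalization is approximated by averaging predictions over random
    uniform replacement values for the patch (a crude stand-in for the
    conditional sampling used in the paper). Assumes image dimensions
    are divisible by `patch`.
    """
    rng = np.random.default_rng(seed)
    h, w = image.shape
    relevance = np.zeros_like(image, dtype=float)
    p_full = model_fn(image)
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            probs = []
            for _ in range(n_samples):
                x = image.copy()
                x[i:i + patch, j:j + patch] = rng.uniform(0, 1, (patch, patch))
                probs.append(model_fn(x))
            p_marg = np.mean(probs)
            # Positive values: the patch is evidence *for* the class.
            relevance[i:i + patch, j:j + patch] = log_odds(p_full) - log_odds(p_marg)
    return relevance

# Hypothetical toy classifier: probability of the class rises with the
# mean brightness of the top-left 8x8 quadrant and ignores everything else.
def toy_model(x):
    return 1.0 / (1.0 + np.exp(-8 * (x[:8, :8].mean() - 0.5)))

rng = np.random.default_rng(0)
img = rng.uniform(0, 1, (16, 16))
img[:8, :8] = 0.9  # bright top-left quadrant: strong evidence for the class
rel = prediction_difference(toy_model, img, patch=4, n_samples=16, seed=1)
```

In this toy setup, patches inside the top-left quadrant receive positive relevance (occluding them lowers the predicted probability), while patches the model ignores receive relevance near zero, which mirrors how the method separates evidence for a class from irrelevant regions.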