Explaining the Mistakes of Neural Networks with Latent Sympathetic Examples

15 Feb 2018 (modified: 15 Feb 2018) · ICLR 2018 Conference Blind Submission
Abstract: Neural networks make mistakes, and the reason why a mistake is made often remains a mystery. As such, neural networks are often considered black boxes. It would be useful to have a method that gives a user an intuitive explanation of why an image is misclassified. In this paper we develop a method for explaining the mistakes of a classifier by visually showing what must be added to an image so that it is correctly classified. Our work combines adversarial examples, generative modeling, and a correction technique based on difference target propagation into a technique that explains why an image is misclassified. We describe our method and demonstrate it on MNIST and CelebA. This approach could help demystify neural networks for users.
TL;DR: New way of explaining why a neural network has misclassified an image
Keywords: Deep learning, Adversarial Examples, Difference Target Propagation, Generative Modelling, Classifiers, Explaining, Sympathetic Examples
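The page carries no code, so the following is only a minimal sketch of the core idea under stated assumptions, not the authors' implementation. It assumes a pretrained VAE (the names `encoder`, `decoder`, `classifier`, `explain_mistake`, and the `reg` proximity penalty are all hypothetical) and uses plain Adam gradient descent in latent space as a stand-in for the paper's difference-target-propagation-based correction: starting from the encoding of the misclassified image, it searches for a nearby latent code whose decoded image the classifier assigns to the true label, and returns the pixel difference as the visual explanation.

```python
import torch
import torch.nn.functional as F

def explain_mistake(x, true_label, encoder, decoder, classifier,
                    steps=200, lr=0.05, reg=0.1):
    """Search the latent space for a small change that corrects a
    misclassification. Assumes `encoder` maps an image batch to a
    latent mean and `decoder` maps latents back to images; both are
    hypothetical stand-ins for a pretrained VAE. `classifier` is
    assumed to return logits."""
    x = x.unsqueeze(0)                        # add batch dimension
    z = encoder(x).detach().requires_grad_(True)
    z0 = z.detach().clone()                   # original latent encoding
    opt = torch.optim.Adam([z], lr=lr)
    target = torch.tensor([true_label])
    for _ in range(steps):
        opt.zero_grad()
        x_hat = decoder(z)
        # push the decoded image toward the true class...
        loss = F.cross_entropy(classifier(x_hat), target)
        # ...while keeping the latent near the original encoding, so
        # the correction stays small and interpretable (assumed penalty)
        loss = loss + reg * (z - z0).pow(2).sum()
        loss.backward()
        opt.step()
    x_corr = decoder(z).detach()
    # corrected image, plus "what must be added" as a difference image
    return x_corr, x_corr - x
```

On MNIST, for example, visualizing the returned difference image would show the strokes that have to be added before the classifier recovers the true digit, matching the abstract's "what must be added" framing.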