Marginal Probability Explanation: A Saliency Map with Closed-loop Validation

22 Sept 2022 (modified: 13 Feb 2023) · ICLR 2023 Conference Withdrawn Submission · Readers: Everyone
Keywords: MPE, saliency map, closed-loop validation, typical sample
TL;DR: We propose a saliency map using marginal probability for each input dimension whose meaningfulness can be closed-loop validated.
Abstract: In this work, we propose a saliency map with pixel-level resolution, called Marginal Probability Explanation (MPE), for a black-box classifier. MPE visualizes the contribution of each input dimension to the classifier by computing the marginal probability obtained when only that dimension is considered. The marginal probabilities are estimated via Monte Carlo sampling from the training dataset. Based on MPE, we introduce typical samples, i.e., samples that maximize the marginal probability in every input dimension. We verify that the proposed MPE is meaningful through closed-loop validation experiments, in which replacing a few pixels that have the lowest marginal probabilities with the corresponding pixels of the typical sample "corrects" the classification. Based on these experiments, we find that deep neural networks probably still rely on pixel-level logic for image classification. Moreover, the critical pixels are not necessarily related to the subject.
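The abstract's pipeline can be sketched in a few steps: estimate a per-dimension marginal distribution from the training data, score each pixel of a test image by its marginal probability (the MPE saliency), build a typical sample by taking the per-dimension argmax, and perform the closed-loop edit by replacing the lowest-probability pixels with typical values. The sketch below is a minimal illustration under our own assumptions (histogram binning of pixel values, single-class marginals, and all function names are hypothetical), not the authors' actual implementation.

```python
import numpy as np

def marginal_probs(train_x, bins=8):
    """Empirical per-dimension marginal distributions (assumed histogram form).

    train_x: (N, D) flattened images for one class, values in [0, 1).
    Returns a (D, bins) array of per-dimension bin probabilities.
    """
    n, d = train_x.shape
    idx = np.clip((train_x * bins).astype(int), 0, bins - 1)
    probs = np.zeros((d, bins))
    for j in range(d):
        probs[j] = np.bincount(idx[:, j], minlength=bins) / n
    return probs

def mpe_saliency(x, probs, bins=8):
    """MPE-style score: marginal probability of each pixel value of x."""
    idx = np.clip((x * bins).astype(int), 0, bins - 1)
    return probs[np.arange(len(x)), idx]

def typical_sample(probs, bins=8):
    """Typical sample: per-dimension argmax of the marginal (bin centers)."""
    return (np.argmax(probs, axis=1) + 0.5) / bins

def closed_loop_edit(x, saliency, typical, k):
    """Replace the k lowest-marginal-probability pixels with typical values."""
    x_new = x.copy()
    low = np.argsort(saliency)[:k]
    x_new[low] = typical[low]
    return x_new

# Toy usage with synthetic "training" data (16-dimensional inputs).
rng = np.random.default_rng(0)
train = rng.random((100, 16))
probs = marginal_probs(train)
x = rng.random(16)
sal = mpe_saliency(x, probs)
x_edited = closed_loop_edit(x, sal, typical_sample(probs), k=4)
```

In the real closed-loop validation the edited image would be fed back to the classifier to see whether the prediction is "corrected"; that classifier call is omitted here since the paper's model is not specified.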
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Deep Learning and representational learning