Discriminative Attribution from Paired Images

Anonymous

14 Jul 2022 (modified: 05 May 2023) · ECCV 2022 Workshop BIC · Blind Submission · Readers: Everyone
Keywords: Explainable AI, Knowledge Extraction, Electron Microscopy, Neuroscience
TL;DR: A method to visualize class-relevant differences between images.
Abstract: We present a method for deep neural network interpretability that combines feature attribution with counterfactual explanations to generate attribution maps highlighting the most discriminative features between classes. Crucially, this method can be used to quantitatively evaluate the performance of feature attribution methods in an objective manner, thus avoiding potential observer bias. We evaluate the proposed method on six diverse datasets and use it to discover previously unknown morphological features of synapses in Drosophila melanogaster. We show quantitatively and qualitatively that the highlighted features are substantially more discriminative than those extracted using conventional attribution methods, and that our approach improves upon similar approaches for counterfactual explainability. We argue that the extracted explanations are better suited for understanding fine-grained class differences as learned by a deep neural network, in particular for image domains where humans have little to no visual priors, such as biomedical datasets.
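The quantitative evaluation the abstract alludes to, copying the most highly attributed pixels from the real image into its paired counterfactual and measuring how much of the classifier's score is recovered, can be sketched as follows. Everything here is a toy assumption for illustration (a linear classifier, a synthetic image pair, and plain-gradient attribution), not the paper's actual models or attribution method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: a linear "classifier" scoring class 1 vs. class 0,
# and a paired "counterfactual" image nudged toward class 0.
w = rng.normal(size=(8, 8))  # classifier weights (hypothetical)

def class_score(img):
    """Score for class 1; linear, so the input gradient is just w."""
    return float((w * img).sum())

x_real = rng.normal(size=(8, 8))      # image of class 1 (synthetic)
x_cf = x_real - 0.5 * np.sign(w)      # paired counterfactual (synthetic)

# Vanilla-gradient attribution for each image of the pair; for a
# linear model the gradient of the score w.r.t. the input is w.
grad_real = w
grad_cf = w

# Discriminative attribution: attribute the *difference* between the
# pair, highlighting pixels that drive the class change.
attr = 0.5 * (grad_real + grad_cf) * (x_real - x_cf)

# Evaluation: copy the top-k attributed pixels from the real image
# into the counterfactual and measure the classifier score recovered.
k = 16
mask = attr >= np.sort(attr, axis=None)[-k]
x_hybrid = np.where(mask, x_real, x_cf)

delta = class_score(x_hybrid) - class_score(x_cf)
print(f"score recovered by copying top-{k} pixels: {delta:.3f}")
```

Because the score change is measured directly on the classifier, the evaluation needs no human judgment of the attribution maps, which is the sense in which it avoids observer bias.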