Explaining Image Classification through Knowledge-aware Neuron Interpretation

22 Sept 2022 (modified: 13 Feb 2023) · ICLR 2023 Conference Withdrawn Submission · Readers: Everyone
Abstract: Although neural networks have achieved remarkable results, they are still met with skepticism because of their lack of transparency. Consequently, explaining neural network predictions has attracted growing attention. State-of-the-art methods, however, rarely incorporate human-understandable external knowledge, making their explanations difficult for humans to interpret. In this paper, we propose a knowledge-aware framework for explaining neural network predictions in image scene classification. With the help of knowledge graphs, we introduce two notions of core concepts to measure the association between concepts and image scenes, and we analyze solutions for prediction explanation and model manipulation. In experiments on two popular scene classification datasets, ADE20k and Opensurfaces, the proposed solutions outperform baseline and state-of-the-art methods; e.g., our method yields over 25% IoU gain on compositional explanations of neuron behaviors. In addition, our core concepts and the related explanation metrics can effectively manipulate model predictions, further leading to a new training method with a 26.7% performance improvement.
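The IoU gain mentioned above refers to the standard intersection-over-union score used in neuron-explanation work (e.g., network dissection and compositional explanations) to measure how well a concept accounts for where a neuron fires. The following is a minimal illustrative sketch of that metric, not the authors' code: the function name, the NumPy boolean-mask representation, and the toy data are all assumptions made for exposition.

```python
# Sketch of the neuron-concept IoU score used to rate explanations of neuron
# behavior. A neuron's activation map is thresholded into a binary mask and
# compared against a concept's segmentation mask over the same pixels.
# All names and data here are illustrative, not from the paper.
import numpy as np

def neuron_concept_iou(activation_mask: np.ndarray, concept_mask: np.ndarray) -> float:
    """IoU between a thresholded neuron activation map and a concept mask.

    Both inputs are boolean arrays of the same spatial shape (e.g., H x W,
    possibly aggregated over a dataset). A higher IoU means the concept
    better explains where the neuron fires.
    """
    intersection = np.logical_and(activation_mask, concept_mask).sum()
    union = np.logical_or(activation_mask, concept_mask).sum()
    return float(intersection) / float(union) if union > 0 else 0.0

# Toy usage: a neuron firing on the left half of a 4x6 map vs. a concept
# (say, "grass") covering the left third -- purely synthetic data.
act = np.zeros((4, 6), dtype=bool); act[:, :3] = True
con = np.zeros((4, 6), dtype=bool); con[:, :2] = True
print(f"IoU = {neuron_concept_iou(act, con):.3f}")  # 8 / 12 = 0.667
```

A compositional explanation scores logical formulas over concept masks (e.g., the union or intersection of several concepts) with this same IoU, searching for the formula that best matches the neuron's activation pattern.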
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Applications (e.g., speech processing, computer vision, NLP)
Supplementary Material: zip