Delve into the Layer Choice of BP-based Attribution Explanations

22 Sept 2022 (modified: 13 Feb 2023) · ICLR 2023 Conference Withdrawn Submission · Readers: Everyone
Keywords: XAI, attribution methods, layer choice, TIF
Abstract: Many issues in attribution methods have been recognized as related to the choice of target layer, such as class insensitivity in earlier layers and low resolution in deeper layers. However, because the ground truth of the decision process is unknown, the effect of layer selection has not been well studied. In this paper, we first employ backdoor attacks to control the decision-making process of the model and quantify the influence of layer choice on class sensitivity, fine-grained localization, and completeness. We obtain three observations: (1) The energy distributions of bottom-layer attributions are class-sensitive; the apparently class-insensitive visualizations arise from a large number of class-insensitive low-score pixels. (2) The choice of target layer determines the completeness and granularity of attributions. (3) Single-layer attributions cannot perform well on both the LeRF and MoRF reliability evaluations. To address these issues, we propose TIF (Threshold Interception and Fusion), a technique that combines the attribution results of all layers. Qualitative and quantitative experiments show that the proposed method produces attributions that are visually sharper and more tightly constrained to the object region than existing methods, addresses all three issues above, and outperforms mainstream methods in reliability and localization evaluations.
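The abstract only names the two steps of TIF (threshold interception of low-score pixels, then fusion across layers), so the sketch below is a minimal illustration of that idea, not the authors' exact procedure: the 0.2 threshold, min-max normalization, bilinear upsampling, and additive fusion are all assumptions introduced for illustration.

```python
# Hypothetical sketch of threshold-interception-and-fusion over per-layer
# attribution maps (e.g., Grad-CAM computed at several target layers).
# The threshold value, normalization, and fusion rule are assumptions.
import torch
import torch.nn.functional as F

def fuse_layer_attributions(layer_maps, target_size, threshold=0.2):
    """Combine 2-D attribution maps from several layers into one map.

    layer_maps : list of 2-D tensors (H_l, W_l), one per target layer
    target_size: (H, W) of the input image
    threshold  : fraction of each map's maximum below which scores are
                 zeroed (the "interception" step; 0.2 is an assumed value)
    """
    fused = torch.zeros(target_size)
    for attr in layer_maps:
        # Upsample the layer attribution to input resolution.
        up = F.interpolate(attr[None, None], size=target_size,
                           mode="bilinear", align_corners=False)[0, 0]
        # Min-max normalize so maps from different layers are comparable.
        up = (up - up.min()) / (up.max() - up.min() + 1e-8)
        # Intercept: suppress low-score (potentially class-insensitive) pixels.
        up = torch.where(up >= threshold * up.max(), up, torch.zeros_like(up))
        # Fuse: simple accumulation across layers (assumed fusion rule).
        fused += up
    # Rescale the fused map to [0, 1] for visualization.
    return (fused - fused.min()) / (fused.max() - fused.min() + 1e-8)

# Example usage with dummy per-layer maps at typical CNN feature resolutions:
maps = [torch.rand(14, 14), torch.rand(28, 28), torch.rand(56, 56)]
heatmap = fuse_layer_attributions(maps, target_size=(224, 224))
```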
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Social Aspects of Machine Learning (e.g., AI safety, fairness, privacy, interpretability, human-AI interaction, ethics)
TL;DR: We quantify the influence of layer choice on BP-based attributions and, based on the experimental results, show how to fuse attributions from different layers to obtain complete, fine-grained, and reliable attribution results.
Supplementary Material: zip