SAFE: Saliency-Aware Counterfactual Explanations for DNN-based Automated Driving Systems

Published: 01 Jan 2023, Last Modified: 05 Nov 2023. CoRR 2023.
Abstract: A counterfactual (CF) explainer identifies the minimum modifications to an input that would alter the model's output to its complement; in other words, it computes the smallest change required to cross the model's decision boundary. Current deep generative CF models often operate on user-selected features rather than on the discriminative features of the black-box model. Consequently, the resulting CF examples may not lie near the decision boundary, contradicting the definition of a CF. To address this issue, we propose in this paper a novel approach that leverages saliency maps to generate more informative CF explanations. Source code is available at: https://github.com/Amir-Samadi//Saliency_Aware_CF.
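To make the idea concrete, below is a minimal sketch of saliency-guided counterfactual search: a gradient saliency map concentrates the perturbation on the pixels the classifier actually relies on, while an L1 penalty keeps the modification minimal. This is an illustrative sketch under stated assumptions, not the paper's SAFE implementation; `model`, `x`, and all function names here are hypothetical.

```python
# Illustrative sketch only -- not the authors' SAFE method.
# Assumes `model` is a differentiable classifier returning logits of
# shape [1, C] and `x` is a single input tensor of shape [1, ...].
import torch
import torch.nn.functional as F

def saliency_map(model, x, target):
    """Vanilla gradient saliency: |d score(target) / d x|."""
    x = x.clone().detach().requires_grad_(True)
    model(x)[0, target].backward()
    return x.grad.detach().abs()

def counterfactual(model, x, cf_class, steps=200, lr=0.05, lam=0.1):
    """Search for a minimal perturbation that flips the prediction to
    `cf_class`, restricting edits to salient (discriminative) regions."""
    sal = saliency_map(model, x, cf_class)
    sal = sal / (sal.max() + 1e-8)            # normalise saliency to [0, 1]
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        logits = model(x + sal * delta)        # perturb salient pixels only
        if logits.argmax(dim=1).item() == cf_class:
            break                              # decision boundary crossed
        cls_loss = F.cross_entropy(logits, torch.tensor([cf_class]))
        prox_loss = delta.abs().mean()         # keep the modification minimal
        loss = cls_loss + lam * prox_loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (x + sal * delta).detach()
```

Masking the perturbation by the saliency map is one simple way to steer changes toward the model's discriminative features, so the resulting example tends to sit near the decision boundary rather than in an arbitrary region of input space.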