Faithfulness and the Notion of Adversarial Sensitivity in NLP Explanations

Published: 21 Sept 2024, Last Modified: 06 Oct 2024 · BlackboxNLP 2024 · CC BY 4.0
Track: Full paper
Keywords: Post-hoc Explanation, Explainable Machine Learning, Adversarial Attack, Reliability, Faithfulness
TL;DR: The paper introduces the notion of adversarial sensitivity in NLP explanations and uses it to investigate the faithfulness of six commonly used post-hoc explainers.
Abstract: Faithfulness is arguably the most critical metric for assessing the reliability of explainable AI. In NLP, current methods for faithfulness evaluation are fraught with discrepancies and biases, often failing to capture the true reasoning of models. We introduce Adversarial Sensitivity as a novel approach to faithfulness evaluation, focusing on the explainer's response when the model is under adversarial attack. Our method assesses the faithfulness of explainers by capturing their sensitivity to adversarial input changes. This work addresses significant limitations in existing evaluation techniques and, furthermore, quantifies faithfulness from a crucial yet underexplored paradigm.
Submission Number: 40