SHAP-Based Explanation Methods: A Review for NLP Interpretability

Anonymous

17 Dec 2021 (modified: 05 May 2023) · ACL ARR 2021 December Blind Submission · Readers: Everyone
Abstract: Model explanations are crucial for the transparent, safe, and trustworthy deployment of machine learning models. The SHapley Additive exPlanations (SHAP) framework is considered by many to be a gold standard for local explanations thanks to its solid theoretical grounding and general applicability. In the years following its publication, several variants have appeared in the literature, each presenting adaptations of the core assumptions and target applications. In this work, we review all relevant SHAP-based interpretability approaches available to date and provide instructive examples, as well as recommendations regarding their applicability to NLP use cases.
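
For context, the sketch below (not taken from the submission) illustrates how local SHAP explanations are typically obtained for an NLP model using the open-source `shap` library together with a Hugging Face `transformers` pipeline; the task, model choice, and input sentence are illustrative assumptions, not details from the paper.

```python
# Minimal sketch: token-level SHAP attributions for a text classifier.
# Assumes `shap` and `transformers` are installed; the pipeline and the
# example sentence are hypothetical placeholders.
import shap
import transformers

# Any text-classification pipeline works; return scores for all labels so
# the explainer can attribute each class's output to the input tokens.
classifier = transformers.pipeline("sentiment-analysis", return_all_scores=True)

# shap.Explainer auto-selects a suitable algorithm for the model type
# (for text pipelines, a masker is inferred from the pipeline's tokenizer).
explainer = shap.Explainer(classifier)

# Local explanation: per-token attributions for each input sentence.
shap_values = explainer(["This survey of SHAP variants is remarkably thorough."])
print(shap_values)
```

By the additive property that gives SHAP its name, the per-token attributions plus the base value sum to the model's output for each input, which is what makes such explanations directly comparable across tokens.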
Paper Type: long
Consent To Share Data: yes