TimeSHAP: Explaining Recurrent Models through Sequence Perturbations

Anonymous

16 Oct 2020 (modified: 05 May 2023) · HAMLETS @ NeurIPS 2020
Keywords: TimeSHAP, SHAP, explanations, recurrent models, sequences, Shapley values
TL;DR: We present TimeSHAP, a post-hoc explanation method for recurrent models that produces feature- and event-wise Shapley additive explanations.
Abstract: Recurrent neural networks are a standard building block in numerous machine learning domains, from natural language processing to time-series classification. While their application has grown ubiquitous, understanding of their inner workings is still lacking. In practice, the complex decision-making in these models is treated as a black box, creating a tension between accuracy and interpretability. Moreover, the ability to understand a model's reasoning process is important in order to debug it and, even more so, to build trust in its decisions. Although considerable research effort has been directed towards explaining black-box models in recent years, recurrent models have received relatively little attention. Any method that aims to explain decisions over a sequence of instances should assess not only feature importance but also event importance, an ability missing from state-of-the-art explainers. In this work, we contribute to filling these gaps by presenting TimeSHAP, a model-agnostic recurrent explainer that leverages KernelSHAP's sound theoretical footing and strong empirical results. As the input sequence may be arbitrarily long, we further propose a pruning method that is shown to dramatically improve its efficiency in practice.
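To make the idea of event-wise Shapley attributions concrete, the following is a minimal sketch in pure Python: it computes exact Shapley values over the *events* of a sequence by enumerating coalitions, replacing absent events with a baseline value. This is an illustration only; TimeSHAP itself uses KernelSHAP-style sampling rather than full enumeration, and `toy_model` is a hypothetical stand-in for a recurrent model's score function, not the paper's setup.

```python
from itertools import combinations
from math import factorial

def toy_model(seq):
    # Hypothetical stand-in for a recurrent model's output score:
    # a weighted sum that favours more recent events.
    return sum(w * x for w, x in zip(range(1, len(seq) + 1), seq))

def event_shapley(seq, baseline=0.0):
    """Exact Shapley value of each event, by coalition enumeration."""
    n = len(seq)
    players = list(range(n))
    phi = [0.0] * n

    def value(coalition):
        # Events outside the coalition are replaced by the baseline,
        # i.e. "perturbed away" before scoring the sequence.
        masked = [seq[i] if i in coalition else baseline for i in players]
        return toy_model(masked)

    for i in players:
        others = [j for j in players if j != i]
        for k in range(n):
            for subset in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[i] += weight * (value(set(subset) | {i}) - value(set(subset)))
    return phi

phi = event_shapley([1.0, 2.0, 3.0])
# Efficiency property: attributions sum to f(x) - f(baseline).
assert abs(sum(phi) - (toy_model([1.0, 2.0, 3.0]) - toy_model([0.0] * 3))) < 1e-9
```

Enumeration costs O(2^n) model evaluations, which is exactly why a sampling approximation such as KernelSHAP, combined with the pruning of older events described in the abstract, is needed for arbitrarily long sequences.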