Post-hoc Saliency Methods Fail to Capture Latent Feature Importance in Time Series Data

Published: 07 Mar 2023, Last Modified: 04 Apr 2023, ICLR 2023 Workshop TML4H Poster
Keywords: Explainability (XAI), Time Series Classification, Saliency Methods, Latent Feature Importance, Deep Learning
TL;DR: This paper investigates a systematic failure of post-hoc saliency methods for latent-space-dependent time series classification.
Abstract: Saliency methods provide visual explanations for deep image processing models by highlighting informative regions in the input images based on per-pixel importance scores. These methods have been adapted to the time series domain, where they aim to highlight important temporal regions in a sequence. This paper identifies, for the first time, a systematic failure of such methods in the time series domain when the underlying patterns (e.g., a dominant frequency or trend) are based on latent information rather than on temporal regions. The latent feature importance postulate is highly relevant for the medical domain, as many medical signals, such as EEG recordings or sensor data for gait analysis, are commonly assumed to be characterized in the frequency domain. To the best of our knowledge, no existing post-hoc explainability method can highlight influential latent information for a classification problem. Hence, in this paper, we frame and analyze the problem of latent feature saliency detection. We first assess the explanation quality of several state-of-the-art saliency methods (Integrated Gradients, DeepLIFT, Kernel SHAP, LIME) applied to various classifiers (LSTM, CNN, and LSTM and CNN trained via saliency-guided training) on simulated time series data with underlying temporal or latent-space patterns. We conclude that Integrated Gradients and DeepLIFT, if redesigned, are potential candidates for producing latent saliency scores.
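As context for the methods compared in the abstract, the following is a minimal NumPy sketch of the Integrated Gradients formula on a toy linear "model" over a short time series. It is not the paper's implementation: the model, weights, and baseline here are hypothetical, chosen so the attributions have a known closed form and the completeness axiom can be checked directly.

```python
import numpy as np

def integrated_gradients(grad_f, x, baseline, steps=200):
    """Midpoint Riemann-sum approximation of Integrated Gradients:
    IG_i(x) = (x_i - x'_i) * integral_0^1 dF/dx_i(x' + a*(x - x')) da
    """
    alphas = (np.arange(steps) + 0.5) / steps
    total = np.zeros_like(x, dtype=float)
    for a in alphas:
        total += grad_f(baseline + a * (x - baseline))
    return (x - baseline) * total / steps

# Hypothetical "classifier score": a linear function over a length-8 series.
# For F(x) = w . x the gradient is constant, so IG reduces to (x - x') * w.
w = np.array([0.0, 1.0, -2.0, 0.5, 0.0, 0.0, 3.0, 0.0])
f = lambda x: w @ x          # model score
grad_f = lambda x: w         # its (constant) gradient

x = np.linspace(1.0, 2.0, 8)     # input time series
baseline = np.zeros(8)           # all-zeros reference signal

attr = integrated_gradients(grad_f, x, baseline)

# Completeness axiom: attributions sum to F(x) - F(baseline).
assert np.isclose(attr.sum(), f(x) - f(baseline))
```

Note that the attributions live on the time axis: each score is tied to one time step, which is exactly why such methods cannot point at a latent property (e.g., a dominant frequency) that is spread across the whole sequence.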